if word in list:
    whatever(word)
That’s a fairly common test, and there isn’t a One True Solution for it in Perl, with TMTOWTDI and all that.
Offhand, I decided to use Benchmark to determine the best way to do the thing. I decided to make a determination by creating an array of X elements, creating a smaller array of Y elements to check against that first array, and iterating over the test Z times.
It’s looking like the first Z, 400 iterations, is large enough for most of the problems, so now I’m looking at filling this:
| smaller array \ larger array | 1000 | 10000 | 100000 |
|---|---|---|---|
| 5 | ? | ? | ? |
| 50 | ? | ? | ? |
| 500 | ? | ? | ? |
So, what is being tested? The following pseudocode explains the general method,
make an array of so many even numbers
make a comparison array of so many numbers
for each value in the comparison array
    check if that value is in the array
    store that result in a hash table
Here, I’m using first from the fan-favorite module, List::Util. It’s funny; for all the talking up I do for min and max and sum0, there’s a whole lot I just don’t use.
Note: I use no warnings in this because Perl throws a warning if first doesn’t find the thing and returns undef.
use List::Util qw{ first };

my @array = map { $_ * 2 } 0 .. ( 10 * $count );
my @comp  = 1 .. $count;
my %is_in_list;
for my $i (@comp) {
    no warnings;
    my $first = first { $_ == $i } @array;
    $is_in_list{$i} =
        defined $first ? 'true' : 'false';
}
Looking further, I’m thinking that any might be a better named choice, but benchmarking them against each other proved that they’re equivalent for time.
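For reference, a membership test with any might look like the sketch below; it returns a boolean directly, so the no warnings dance isn't needed. (This is a small illustration, not the benchmarked code.)

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use List::Util qw{ any };

my @array = map { $_ * 2 } 0 .. 50;    # even numbers 0 .. 100

for my $i ( 3, 4 ) {
    # any returns true as soon as the block matches, false otherwise,
    # so there is no undef return value to warn about
    my $found = ( any { $_ == $i } @array ) ? 'true' : 'false';
    print "$i: $found\n";              # prints "3: false" then "4: true"
}
```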
grep is a functional subroutine I used well before I had any real understanding of what functional programming is, and here, I’m using it to tell whether each value is in the array.
my @array = map { $_ * 2 } 0 .. ( 10 * $count );
my @comp  = 1 .. $count;
my %is_in_list;
for my $i (@comp) {
    $is_in_list{$i} =
        ( grep { $i == $_ } @array )
        ? 'true' : 'false';
}
Hash has been my preferred solution for a long while. Here I check everything once and store it in a hash, so that every subsequent test is instantaneous.
my @array = map { $_ * 2 } 0 .. ( 10 * $count );
my @comp  = 1 .. $count;
my %array = map { $_ => 1 } @array;
my %is_in_list;
for my $i (@comp) {
    $is_in_list{$i} =
        defined $array{$i} ? 'true' : 'false';
}
| checking n values (1,000-element array, wallclock secs) | First | Grep | Hash |
|---|---|---|---|
| 5 | 1 | 2 | 5 |
| 50 | 6 | 11 | 5 |
| 500 | 46 | 97 | 5 |
Benchmark warned that some of these runs were too quick to give a reliable count, so let’s look at the next one.
| checking n values (10,000-element array, wallclock secs) | First | Grep | Hash |
|---|---|---|---|
| 5 | 13 | 19 | 103 |
| 50 | 150 | 257 | 80 |
| 500 | 1045 | 1887 | 57 |
And a chart generated in Google Sheets:

The values in these tables are wallclock seconds, so using Grep to check for 500 values in the 10,000-element array took 1,887 seconds, which is over half an hour. Meanwhile, the Hash version is, within the context of the same array, essentially constant, while the others grow with the number of values checked.
The larger the list you’re working with, the more it pays to keep track of list membership with a hash, if that’s a thing you need; for smaller lists, any or first or grep would serve.
But data sizes grow, so planning ahead might be smart, making Hash a valid choice for small arrays and the best choice for bigger ones.
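That planning-ahead idiom is worth spelling out: build the lookup hash once, and every later membership test is a single constant-time hash probe, however large the list. A minimal sketch (the variable names are mine):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

my @array = map { $_ * 2 } 0 .. 5_000;    # the big list: evens 0 .. 10,000
my %seen  = map { $_ => 1 } @array;       # built once, O(n)

# every later test is O(1), no matter how large @array is
for my $i ( 1 .. 5 ) {
    print "$i: ", ( exists $seen{$i} ? 'true' : 'false' ), "\n";
}
```

This prints true for 2 and 4 and false for the odd numbers, without ever rescanning the array.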
Which vindicates my choices, which feels good.
========================
count: 400
array: 1000
comp: 5
Benchmark: timing 400 iterations of First, Grep, Hash...
First: 1 wallclock secs ( 0.33 usr + 0.00 sys = 0.33 CPU) @ 1212.12/s (n=400)
(warning: too few iterations for a reliable count)
Grep: 2 wallclock secs ( 1.08 usr + 0.03 sys = 1.11 CPU) @ 360.36/s (n=400)
Hash: 5 wallclock secs ( 1.95 usr + 0.00 sys = 1.95 CPU) @ 205.13/s (n=400)
========================
count: 400
array: 1000
comp: 50
Benchmark: timing 400 iterations of First, Grep, Hash...
First: 6 wallclock secs ( 2.26 usr + 0.00 sys = 2.26 CPU) @ 176.99/s (n=400)
Grep: 11 wallclock secs ( 4.57 usr + 0.02 sys = 4.59 CPU) @ 87.15/s (n=400)
Hash: 5 wallclock secs ( 1.36 usr + 0.00 sys = 1.36 CPU) @ 294.12/s (n=400)
========================
count: 400
array: 1000
comp: 500
Benchmark: timing 400 iterations of First, Grep, Hash...
First: 46 wallclock secs (17.81 usr + 0.11 sys = 17.92 CPU) @ 22.32/s (n=400)
Grep: 97 wallclock secs (43.08 usr + 0.12 sys = 43.20 CPU) @ 9.26/s (n=400)
Hash: 5 wallclock secs ( 1.70 usr + 0.05 sys = 1.75 CPU) @ 228.57/s (n=400)
========================
count: 400
array: 10000
comp: 5
Benchmark: timing 400 iterations of First, Grep, Hash...
First: 13 wallclock secs ( 4.03 usr + 0.03 sys = 4.06 CPU) @ 98.52/s (n=400)
Grep: 19 wallclock secs ( 6.51 usr + 0.03 sys = 6.54 CPU) @ 61.16/s (n=400)
Hash: 103 wallclock secs (43.47 usr + 0.28 sys = 43.75 CPU) @ 9.14/s (n=400)
========================
count: 400
array: 10000
comp: 50
Benchmark: timing 400 iterations of First, Grep, Hash...
First: 150 wallclock secs (55.47 usr + 0.25 sys = 55.72 CPU) @ 7.18/s (n=400)
Grep: 257 wallclock secs (105.11 usr + 0.28 sys = 105.39 CPU) @ 3.80/s (n=400)
Hash: 80 wallclock secs (30.44 usr + 0.25 sys = 30.69 CPU) @ 13.03/s (n=400)
========================
count: 400
array: 10000
comp: 500
Benchmark: timing 400 iterations of First, Grep, Hash...
First: 1045 wallclock secs (448.34 usr + 1.36 sys = 449.70 CPU) @ 0.89/s (n=400)
Grep: 1887 wallclock secs (837.35 usr + 2.70 sys = 840.05 CPU) @ 0.48/s (n=400)
Hash: 57 wallclock secs (30.45 usr + 0.06 sys = 30.51 CPU) @ 13.11/s (n=400)
========================
count: 400
array: 100000
comp: 5
Benchmark: timing 400 iterations of First, Grep, Hash...
First: 117 wallclock secs (55.00 usr + 0.27 sys = 55.27 CPU) @ 7.24/s (n=400)
Grep: 176 wallclock secs (75.20 usr + 0.41 sys = 75.61 CPU) @ 5.29/s (n=400)
Hash: 899 wallclock secs (379.11 usr + 1.31 sys = 380.42 CPU) @ 1.05/s (n=400)
#!/usr/bin/env perl
# elements_in_list.pl

use strict;
use warnings;
use experimental qw{ say signatures state fc };

use Benchmark qw{ :all };
use List::Util qw{ first };

# originally wrote this to switch between
# different iteration counts and different
# array sizes, but the sweet spot for
# learning is 400 iterations and an array of
# 10,000 elements.
for my $count (qw{ 400 }) {
    for my $array (qw{ 10000 }) {
        for my $comp (qw{ 5 50 500 }) {
            say <<~"END";
                ========================
                count: $count
                array: $array
                comp: $comp
                END
            timethese(
                $count,
                {
                    'First' => sub {
                        my @array = map { $_ * 2 } 0 .. ( 10 * $array );
                        my @comp  = 1 .. $comp;
                        my %is_in_list;
                        for my $i (@comp) {
                            no warnings;
                            my $first = first { $_ == $i } @array;
                            $is_in_list{$i} =
                                defined $first ? 'true' : 'false';
                        }
                    },
                    'Grep' => sub {
                        my @array = map { $_ * 2 } 0 .. ( 10 * $array );
                        my @comp  = 1 .. $comp;
                        my %is_in_list;
                        for my $i (@comp) {
                            $is_in_list{$i} =
                                ( grep { $i == $_ } @array )
                                ? 'true'
                                : 'false';
                        }
                    },
                    'Hash' => sub {
                        my @array = map { $_ * 2 } 0 .. ( 10 * $array );
                        my @comp  = 1 .. $comp;
                        my %array = map { $_ => 1 } @array;
                        my %is_in_list;
                        for my $i (@comp) {
                            $is_in_list{$i} =
                                defined $array{$i} ? 'true' : 'false';
                        }
                    },
                }
            );
        }
    }
}
exit;
I’m working on a lot of things and can’t work up the creativity to come up with the facts about 258 today. Incidentally, if you’re looking for an experienced Perl guy, ways to contact me are listed below.
Submitted by: Mohammad Sajid Anwar
You are given an array of positive integers, @ints.
Write a script to find out how many integers have even number of digits.
I wasn’t trying to “victory lap” this one, really, but on first glance, I had this.
Perl variables are simultaneously strings and numbers. We use variable overloading, not operator overloading, so if you do a math thing on a string, Perl will find the most number-y take on that variable, and if you do a string thing, it’ll treat it as such.
If you want to find the length of a string, use length.
If you want to find an even number, use modulus, or %.
If you want to only pass only the number with an even number of digits, use grep { ( length $_ ) % 2 == 0 }. The parentheses are important because otherwise, it’ll try to get the length of $_ % 2 .
If you want to find the length of an array, use scalar.
So, scalar grep { ( length $_ ) % 2 == 0 }. You could probably golf that down a lot, but to me, this is the minimal readable size of this solution.
#!/usr/bin/env perl

use strict;
use warnings;
use experimental qw{ say postderef signatures state };

my @examples = (
    [ 10, 1, 111, 24, 1000 ],
    [ 111, 1, 11111 ],
    [ 2, 8, 1024, 256 ],
);

for my $example (@examples) {
    my $output = scalar grep { ( length $_ ) % 2 == 0 } $example->@*;
    my $ints   = join ', ', $example->@*;
    say <<~"END";
        Input: \@ints = ($ints)
        Output: $output
        END
}
$ ./ch-1.pl
Input: @ints = (10, 1, 111, 24, 1000)
Output: 3
Input: @ints = (111, 1, 11111)
Output: 0
Input: @ints = (2, 8, 1024, 256)
Output: 1
Submitted by: Mohammad Sajid Anwar
You are given an array of integers, @ints, and an integer $k.
Write a script to find the sum of values whose index binary representation has exactly $k 1-bits set.
I use sprintf a fair amount, as an easy way to left-pad numbers and a way to cut long floating point numbers to a more usable set of significant digits, but it also, with '%b', converts a number to binary.
split // splits between every character, so combining them turns 5 into [1,0,1].
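As a tiny demonstration of that pipeline on a single number (this is an illustration, not part of the solution):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use List::Util qw{ sum0 };

my $i      = 5;
my $binary = sprintf '%b', $i;       # '101'
my @bits   = split //, $binary;      # (1, 0, 1)
my $count  = sum0 @bits;             # 2 one-bits set
print "$i -> $binary -> $count\n";   # prints "5 -> 101 -> 2"
```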
List::Util has sum0, but I suppose I could’ve nodded to the previous task and written scalar grep { $_ == 1 } instead. Alas.
I could do it in a more functional way. I actually wrote it.
return sum0 map { $_->[1] }
grep { $_->[2] == $k }
map { [ $_, $ints[$_], sum0 split //, sprintf '%b', $_ ] } 0 .. $#ints;
But that’s about the size of the iterative version, and it’s much less readable to me.
#!/usr/bin/env perl

use strict;
use warnings;
use experimental qw{ say postderef signatures state };

use List::Util qw{ sum0 };

my @examples = (
    {
        ints => [ 2, 5, 9, 11, 3 ],
        k    => 1
    },
    {
        ints => [ 2, 5, 9, 11, 3 ],
        k    => 2
    },
    {
        ints => [ 2, 5, 9, 11, 3 ],
        k    => 0
    }
);

for my $example (@examples) {
    my @output = sum_of_values($example);
    my $ints   = join ', ', $example->{ints}->@*;
    my $k      = join ', ', $example->{k};
    my $output = join ', ', @output;
    say <<~"END";
        Input: \@ints = ($ints), \$k = $k
        Output: $output
        END
}

sub sum_of_values ($obj) {
    my @ints   = $obj->{ints}->@*;
    my $k      = $obj->{k};
    my $output = 0;
    for my $i ( 0 .. $#ints ) {
        my $s = sum0 split //, sprintf '%b', $i;
        $output += $ints[$i] if $s == $k;
    }
    return $output;
}
$ ./ch-2.pl
Input: @ints = (2, 5, 9, 11, 3), $k = 1
Output: 17
Input: @ints = (2, 5, 9, 11, 3), $k = 2
Output: 11
Input: @ints = (2, 5, 9, 11, 3), $k = 0
Output: 2
A lot of modules have a Makefile, even if they don’t have anything to compile.
The trick there is make dist, which creates a distribution. In terms of PAUSE, the way that modules enter CPAN, a dist is a file taking the form $MODULE_NAME-$VERSION_NUMBER.tar.gz, which of course you could make yourself, but by using a makefile, you consistently do the right thing.
Then, you log into PAUSE and upload it. There are means to add this to your GitHub Actions, which I might do for some projects. The thing that hits me is that unless $VERSION_NUMBER changes, and changes upward, PAUSE doesn’t do anything with it. Does that make it “safe” and “okay” to make a bunch of builds when you’re just adding testing?
Anyway, note to self.
But I’m told it’s good for me, and here I am, adding to something I’ve successfully run tests on before, on both services.
name: run perl ubuntu
on:
  push:
    branches:
      - "*"
  pull_request:
    branches:
      - "*"
jobs:
  perl-job:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        perl-version:
          [
            "5.38",
            "5.36",
            "5.34",
            "5.32",
            "5.30",
            "5.28",
            "5.26",
            "5.24",
            "5.22",
            "5.20",
            "5.18",
            "5.16",
            "5.14",
            "5.12",
            "5.10",
            "5.8",
          ]
    container:
      image: perldocker/perl-tester:${{ matrix.perl-version }}
    steps:
      - uses: actions/checkout@v2
      - name: Regular tests
        run: |
          cpanm --notest Test::Builder Test::More IO::Pty IO::Tty
          cpanm --installdeps --notest .
          perl Makefile.PL
          make
          make test
I’m hardcoding the dependencies and test modules in the first line of my run above. In the fullness of time, I might pull those and see what’s going on. I think I know the problem that I was having that made me want to pull them, and that wasn’t about dependencies.
I am jazzed that I can test this module as far back as 5.8. I had tested to 5.10 on Travis before, so when I was getting the perl-version thing going, I decided to go as far back as I deemed even vaguely reasonable. Thanks to perldocker on DockerHub, created by the Perl and Raku Foundation.
Also thanks to Gabor Szabo’s PerlMaven and the five-year-old GitHub Action examples I’m heavily cribbing from.
I think I’ll only be able to test on macOS against macos-latest and the Perl it provides, but that’s fine.
But, while I had this going, I jumped to the Appveyor Windows-specific script, and found that it was trying to do MSBuild on it, like through Visual Studio, and that’s not necessary.
(It makes no sense to get a tool or service working again when I’m planning to replace them, but it makes me feel like I’m accomplishing a thing.)
But once that was fixed, I found another error. cpanm couldn’t build a dependency, but the build log was on the far side of the service, so I tried locally, and saw this in the build log.
OS unsupported at Makefile.PL line 6.
IO::Tty doesn’t even try to build on Windows. So whatever positive tests I got years ago were false positives, and my choices are to live with it or make IO::Tty work with Strawberry Perl.
So I have that going for me.
Most of the time, for this blog, I write it in VSCode, with a Markdown Preview window opened in the right half of the editor window. If you see green in my GitHub heatmap these days, there’s a good chance that I’m committing and pushing with VS Code’s default Git integration. It is good for commit and push, but often you want to understand the greater application, and honestly, this blog is all commits to master and pushes to origin.
I decided to clone down RLangSyntax, which provided syntax highlighting for ActiveState’s KomodoEdit, my preferred editor after UltraEdit and before SublimeText. At that time, I was starting to do more R development, both for work and for personal projects, and KomodoEdit’s lack of R syntax highlighting annoyed me, so I found an existing abandoned project, copied it, put it on GitHub, added the GPL to the repo and explained the old maintainer’s licensing in the README, and went forward.
Turns out, the core issue was that there’s an RDF file that the editor reads, and needed em:targetApplication->Description->em:maxVersion to be equal to or higher than the version number of the existing editor. I suppose I can understand not wanting to release it to editor versions that don’t exist yet, but if I had put an abstractly-large version number into the maxVersion fields, I wouldn’t need half the commits I made.
So, as I mentioned, I started developing in other editors, but when I noticed that KomodoEdit has a new version, I would update the application and make a new release.
Until 9.3, at least. That’s when I found that KomodoEdit 9 had R syntax highlighting and nobody needed this project anymore. I updated the README to say So Long And Thanks For The Fish and forgot about it.
I figured there’s just enough of a history that it would be worth looking at in GitLab, and the minimal outsider interaction would mean it’s okay to show. It’s my tent, it’s my clowns, so it’s my circus, so to speak.

You can see several forks that got pulled into master from another user, this one fixing a misspelling that had slipped past me.
But the graph visualization, showing the tags that indicate points when I packaged for releases and the forks where Sergey cleaned up after me, is very interesting. I could definitely see value in this in projects with longer history, more users, and where I would be responsible for integrating pull requests into forks, forks into branches, and branches into production.
(Note to self, start using “production” instead of “master” for default branch, and look up other people’s best practices for branches.)
After a while of being unaware that another UI icon showed up in my editor and a little bit of distrust, I’m tentatively “that’s cool” about GitLens. Has anyone else tried it?
I am a long-time user of VS Code. So long, I found a bug in VS Code in 2018. At that time, I was working out of Ubuntu Linux and using SSHFS under FUSE to have the files and directories for research computing available to me on my local machine.
I did this almost entirely so I could open these files in VS Code, and opening these batch files from the command line was the bug.
If I was working in that position today, I would not need to do that, because today (and I am not sure how long this capability has existed), VS Code supports Remote Development. I think it started so that people like me could code within their WSL environments without having to jump through hoops like I had to in 2018.
Basically, there’s a separation between the frontend code and the backend, and they made it so that the communication between the two is abstracted. This means that the backend code can be set up and run invisibly to the user. With WSL, that communication doesn’t leave the computer, but VS Code is an SSH client that uses your .ssh/config, so you can even use ProxyJump to access machines not directly accessible.
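For example, a ProxyJump setup in .ssh/config might look like the sketch below; the hostnames and user are invented for illustration, and VS Code’s Remote - SSH picks the aliases up automatically:

```
# ~/.ssh/config -- hostnames here are made up for illustration
Host bastion
    HostName bastion.example.com
    User me

Host worker
    HostName worker.internal
    User me
    ProxyJump bastion   # tunnel through bastion to reach worker
```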
This also means that you can use VS Code with Docker containers and do Peer Programming remotely with Live Share.
Additionally, VS Code has an integrated terminal, so when you’re connected to the remote machine, you have a prompt there. At that time, I was working through AWS Workspaces on Windows Server Datacenter, which didn’t allow me to install MSI packages, meaning that I couldn’t use Windows Terminal, so VS Code ended up being the terminal I used almost all the time.
The terminal list is customizable, so if you want the ability to use Nushell or Node or Python as a REPL, that’s doable.
Early in my time with VS Code, when I opened up the editor, I got windows with a purple bar at the bottom.

I was never in love with the color, and when, by accident, I opened things differently, I got a blue bar at the bottom.

It took a while, and a discussion with genehack, to understand that the second window is different because it is a workspace.
Workspaces mean that, when you close and open them again, the same files will be open. I believe the terminal history will also be retained.
They are also customizable. VS Code is very customizable, and those customizations are stored and are editable as JSON, but you can also specify customizations for workspaces. As an example, imagine you’re working on a group project outside your normal development, and there are differences, such as indent width, that are specific to that project.
In my talk, I created a workspace with the following settings. Several are definitely about display: changing font size and demonstrating ligatures in modern development fonts were part of what I wanted, but I will point out terminal.integrated.defaultProfile.windows. When I open a window in my default environment, it defaults to a bash shell in WSL, but here, I change that to PowerShell, which I find bash-like enough for most shell purposes.
{
    "editor.fontSize": 12,
    "editor.fontLigatures": true,
    "editor.renderWhitespace": "boundary",
    "editor.tabSize": 8,
    "editor.wrappingIndent": "deepIndent",
    "editor.defaultFormatter": "bscan.perlnavigator",
    "terminal.integrated.fontSize": 12,
    "terminal.integrated.defaultProfile.windows": "PowerShell"
}
The customization also works with Remote Environments, so you can specify most anything for windows running over SSH. I commonly use different themes for different hosts, to help me to know where I am. Additionally, you can create and use different profiles. Looking at the “Create New Profile” screen shows templates for Python, Angular, Doc Writer, Data Science, Node, and two Java-specific templates.
Also, within each context setting, you can specify things by language. Below is a subset of my default settings.json.
{
    "[css]": {
        "editor.defaultFormatter": "esbenp.prettier-vscode"
    },
    "[html]": {
        "editor.fontFamily": "'Iosevka Term', 'Fantasque Sans Mono', Consolas, 'Courier New', monospace",
        "editor.defaultFormatter": "esbenp.prettier-vscode"
    },
    "[json]": {
        "editor.defaultFormatter": "esbenp.prettier-vscode",
        "editor.fontSize": 18,
        "editor.fontFamily": "'Iosevka Term', 'Fantasque Sans Mono', Consolas, 'Courier New', monospace"
    },
    "[jsonc]": {
        "editor.defaultFormatter": "esbenp.prettier-vscode",
        "editor.fontSize": 15,
        "editor.fontFamily": "'Iosevka Term', 'Fantasque Sans Mono', Consolas, 'Courier New', monospace"
    },
    "[javascript]": {
        "editor.defaultFormatter": "esbenp.prettier-vscode"
    },
    "[markdown]": {
        "editor.defaultFormatter": "esbenp.prettier-vscode"
    },
    "[perl]": {
        "editor.defaultFormatter": "bscan.perlnavigator"
    },
    "[python]": {
        "editor.defaultFormatter": "ms-python.python",
        "editor.formatOnType": true
    },
    "[yaml]": {
        "editor.defaultFormatter": "esbenp.prettier-vscode"
    }
}
These examples show mostly the font family, font size and default formatting engine for each language, but with Python, there’s editor.formatOnType, which formats the text while you write it. There are also options for formatOnSave and formatOnPaste, which start the formatting on other events.
So, VS Code can behave differently based on workspace, on environment, on profile and on language.
And the custom behavior can involve functionality. So many of the language examples mention Prettier, a code formatter for JavaScript, JSON, CSS, HTML, Markdown, YAML and a number of other languages. I mentioned that VS Code can work with Docker, and that is done via the Docker extension.
Using VS Code as an SSH client to connect to remote hosts comes from the Remote - SSH extension, and there’s also Remote Explorer, jumping directly to the workspaces on remote hosts.
There’s an extension I find powerful called RainbowCSV, which makes each column of a comma-separated file a different color, making them easier to read.
Not related to extensions, but VS Code is a Git client, so you can use it to connect to GitHub or Bitbucket directly, without the use of command-line Git tools or GitHub desktop. Most of the time, when I add to this blog, I write the markdown using the built-in Markdown Preview and commit and push from within VS Code. They’ve recently added GitLens. I’ve barely touched it, but it seems like a useful tool for working with a large, long-lived repository with multiple contributors.
There is more to say about VS Code, about the formatters/linters and syntax highlighters that exist for most languages, extensions that allow you to treat VS Code like vim, debuggers and the like. Ultimately, the best thing about VS Code is the community, which creates tools to make it easier to do the things you need to do.
257 is the country code for Burundi. It is not currently assigned in the North American Numbering Plan, though it is scheduled to be assigned to British Columbia in 2025.
Submitted by: Mohammad Sajid Anwar
You are given an array of integers, @ints. Write a script to find out how many integers are smaller than the current one, i.e.
foreach ints[i], count ints[j] < ints[i] where i != j.
The key thought here is that we’re dealing with less than, not less than or equal to. I added code to remove the current value from the table, but any number i is not going to be less than itself, so grep { $_ < $i } will always pass by $_ == $i. Easy to handle in a loop, but I wrote a very functional solution. Nested functional, in fact.
#!/usr/bin/env perl

use strict;
use warnings;
use experimental qw{ say postderef signatures state };

my @examples = (
    [ 5, 2, 1, 6 ],
    [ 1, 2, 0, 3 ],
    [ 0, 1 ],
    [ 9, 4, 9, 2 ],
);

for my $example (@examples) {
    my @output = smaller_than( $example->@* );
    my $input  = join ', ', $example->@*;
    my $output = join ', ', @output;
    say <<~"END";
        Input: \@ints = ($input)
        Output: ($output)
        END
}

sub smaller_than (@ints) {
    return map {
        my $i = $_;
        scalar grep { $_ < $i } @ints;
    } @ints;
}
$ ./ch-1.pl
Input: @ints = (5, 2, 1, 6)
Output: (2, 1, 0, 3)
Input: @ints = (1, 2, 0, 3)
Output: (1, 2, 0, 3)
Input: @ints = (0, 1)
Output: (0, 1)
Input: @ints = (9, 4, 9, 2)
Output: (2, 1, 2, 0)
Submitted by: Ali Moradi
Given a matrix M, check whether the matrix is in reduced row echelon form. A matrix must have the following properties to be in reduced row echelon form:
- If a row does not consist entirely of zeros, then the first nonzero number in the row is a 1. We call this the leading 1.
- If there are any rows that consist entirely of zeros, then they are grouped together at the bottom of the matrix.
- In any two successive rows that do not consist entirely of zeros, the leading 1 in the lower row occurs farther to the right than the leading 1 in the higher row.
- Each column that contains a leading 1 has zeros everywhere else in that column.
For more information check out this wikipedia article.
This is the tough one.
Because we always want to display the matrices, I pulled pad() and format_matrix() from a previous solution.
Each of the four requirements gets a new test, and if the matrix fails that test, it returns 0 for failure. At the end of the function, it returns 1.
As usual, I use functions from List::Util: max for pad, of course, but also first. I use it here to get the index of the first entry in a row that matches the requirement, meaning the first nonzero value, with first { $matrix->[$i][$_] != 0 } 0 .. -1 + scalar @row. I use a lot of the functional tools, like scalar and grep and map, but not exclusively.
For the fourth test, I find each 1 in a column, then determine whether it’s a leading 1 by looking for nonzero values before it in its row. If it is, I zero out the current position in the column array, check for remaining nonzero values, and fail if there are any.
#!/usr/bin/env perl
use strict;
use warnings;
use experimental qw{ say postderef signatures state };

use List::Util qw{ first max };

my @examples = (
    [ [ 1, 1, 0 ], [ 0, 1, 0 ], [ 0, 0, 0 ] ],
    [
        [ 0, 1, -2, 0, 1 ],
        [ 0, 0, 0, 1, 3 ],
        [ 0, 0, 0, 0, 0 ],
        [ 0, 0, 0, 0, 0 ]
    ],
    [ [ 1, 0, 0, 4 ], [ 0, 1, 0, 7 ], [ 0, 0, 1, -1 ] ],
    [
        [ 0, 1, -2, 0, 1 ],
        [ 0, 0, 0, 0, 0 ],
        [ 0, 0, 0, 1, 3 ],
        [ 0, 0, 0, 0, 0 ]
    ],
    [ [ 0, 1, 0 ], [ 1, 0, 0 ], [ 0, 0, 0 ] ],
    [ [ 4, 0, 0, 0 ], [ 0, 1, 0, 7 ], [ 0, 0, 1, -1 ] ]
);

for my $example (@examples) {
    my $output = reduced_row_eschelon($example);
    my $input  = format_matrix($example);
    state $i = 0;
    $i++;
    say <<~"END";
        Example $i
        Input: \$M = $input
        Output: $output
        END
}

sub reduced_row_eschelon ($matrix) {
    my @is_nonzero_row;
    for my $i ( 0 .. -1 + scalar $matrix->@* ) {
        my @row = $matrix->[$i]->@*;

        # 1. If a row does not consist entirely of zeros, then the first
        #    nonzero number in the row is a 1. We call this the leading 1.
        my @t1 = grep { $_ != 0 } @row;
        if ( scalar @t1 ) {
            return 0 unless $t1[0] == 1;
        }

        # 2. If there are any rows that consist entirely of zeros, then
        #    they are grouped together at the bottom of the matrix.
        if ( !scalar @t1 ) {
            for my $j ( $i .. -1 + scalar $matrix->@* ) {
                my $count = scalar grep { $_ ne 0 } $matrix->[$j]->@*;
                return 0 if $count;
            }
        }

        # 3. In any two successive rows that do not consist entirely of
        #    zeros, the leading 1 in the lower row occurs farther to the
        #    right than the leading 1 in the higher row.
        $is_nonzero_row[$i] = scalar @t1 ? 1 : 0;
        if ( $i > 0 && $is_nonzero_row[$i] && $is_nonzero_row[ $i - 1 ] ) {
            my $curr =
                first { $matrix->[$i][$_] != 0 } 0 .. -1 + scalar @row;
            my $prev =
                first { $matrix->[ $i - 1 ][$_] != 0 } 0 .. -1 + scalar @row;
            return 0 unless $curr > $prev;
        }
    }

    # 4. Each column that contains a leading 1 has zeros everywhere else
    #    in that column.
    for my $i ( 0 .. -1 + scalar $matrix->[0]->@* ) {

        # 1. get the column
        my @col = map { $matrix->[$_][$i] } 0 .. -1 + scalar $matrix->@*;

        # 2. find the 1, determine if it's a leading 1 by checking that row
        if ( grep { $_ == 1 } @col ) {

            # for each 1
            my @ones = grep { 1 == $col[$_] } 0 .. -1 + scalar @col;
            for my $j (@ones) {
                my @row     = $matrix->[$j]->@*;
                my @sub     = @row[ 0 .. $i - 1 ];
                my $leading = ( 0 == grep { $_ != 0 } @sub ) ? 1 : 0;
                if ($leading) {
                    $col[$j] = 0;
                    my $nonzero_count = scalar grep { $_ ne 0 } @col;
                    return 0 if $nonzero_count;
                }
            }
        }
    }

    # say format_matrix($matrix);
    return 1;
}

sub format_matrix ($matrix) {
    my $maxlen = max map { length $_ } map { $_->@* } $matrix->@*;
    my $output = join "\n ", '[', (
        map { qq{ [$_],} } map {
            join ',',
                map { pad( $_, 1 + $maxlen ) }
                $_->@*
        } map { $matrix->[$_] } 0 .. -1 + scalar $matrix->@*
    ),
    ']';
    return $output;
}

sub pad ( $str, $len = 4 ) { return sprintf "%${len}s", $str; }
$ ./ch-2.pl
Example 1
Input: $M = [
[ 1, 1, 0],
[ 0, 1, 0],
[ 0, 0, 0],
]
Output: 0
Example 2
Input: $M = [
[ 0, 1, -2, 0, 1],
[ 0, 0, 0, 1, 3],
[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0],
]
Output: 1
Example 3
Input: $M = [
[ 1, 0, 0, 4],
[ 0, 1, 0, 7],
[ 0, 0, 1, -1],
]
Output: 1
Example 4
Input: $M = [
[ 0, 1, -2, 0, 1],
[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 1, 3],
[ 0, 0, 0, 0, 0],
]
Output: 0
Example 5
Input: $M = [
[ 0, 1, 0],
[ 1, 0, 0],
[ 0, 0, 0],
]
Output: 0
Example 6
Input: $M = [
[ 4, 0, 0, 0],
[ 0, 1, 0, 7],
[ 0, 0, 1, -1],
]
Output: 0
256 is also the area code of Huntsville, Alabama, a fact that must amuse some rocket scientists.
Submitted by: Mohammad Sajid Anwar
You are given an array of distinct words, @words.
Write a script to find the maximum pairs in the given array. The words $words[i] and $words[j] can be a pair if one is the reverse of the other.
So, we’re given an array of words. In the example cases, they’re all two-letter words. A pair is when two words, when sorted, are the same: pw and wp would be a pair, because both sort to pw.
I use map in a void context again, instead of a for loop, splitting and sorting and joining each word, then use map { $hash{$_}++} to count all the individual munged words.
So, we have a pair when $hash{$munge} > 1, so I grep for that, and use scalar to get the count of what passes.
#!/usr/bin/env perl

use strict;
use warnings;
use experimental qw{ say postderef signatures state };

my @examples = (
    [ "ab", "de", "ed", "bc" ],
    [ "aa", "ba", "cd", "ed" ],
    [ "uv", "qp", "st", "vu", "mn", "pq" ],
);

for my $example (@examples) {
    my $input  = join ', ', map { qq{"$_"} } $example->@*;
    my $output = maximum_pairs( $example->@* );
    say <<~"END";
        Input: \@words = ($input)
        Output: $output
        END
}

sub maximum_pairs (@input) {
    my %hash;
    map {
        my $munge = join '', sort split //, $_;
        $hash{$munge}++
    } @input;
    return scalar grep { $_ > 1 } values %hash;
}
$ ./ch-1.pl
Input: @words = ("ab", "de", "ed", "bc")
Output: 1
Input: @words = ("aa", "ba", "cd", "ed")
Output: 0
Input: @words = ("uv", "qp", "st", "vu", "mn", "pq")
Output: 2
Submitted by: Mohammad Sajid Anwar
You are given two strings, $str1 and $str2.
Write a script to merge the given strings by adding in alternative order starting with the first string. If a string is longer than the other then append the remaining at the end.
Normally, I would want to split both into arrays, then push the output into an array, one character at a time. I decided to do this with strings instead.
While there’s still both $str1 and $str2, I use substr to add the first characters of each to the output, then remove both first characters. I do this by using substr as both an lvalue, capable of being written to, and an rvalue, capable of being read from. That’s so very useful.
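The lvalue trick in isolation, as a toy example:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

my $str   = "hello";
my $first = substr( $str, 0, 1 );    # rvalue: reads 'h'
substr( $str, 0, 1 ) = '';           # lvalue: deletes the first character
print "$first $str\n";               # prints "h ello"
```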
Once one string or the other is empty, we stop the while loop and join the remaining string to the output. Thing is, if either $str1 or $str2 is empty, $output . $str1 . $str2 is equivalent to $output . $str2 . $str1, so returning the concatenated string finishes the job with no array-related functions like join.
#!/usr/bin/env perl

use strict;
use warnings;
use experimental qw{ say postderef signatures state };

my @examples = (
    [ "abcd", "1234" ],
    [ "abc", "12345" ],
    [ "abcde", "123" ],
);

for my $example (@examples) {
    my $output = merge_strings( $example->@* );
    my $p      = $example->[0];
    my $w      = $example->[1];
    say <<~"END";
        Input: \$str1 = "$p", \$str2 = "$w"
        Output: "$output"
        END
}

sub merge_strings ( $str1, $str2 ) {
    my $output = '';
    while ( length $str1 && length $str2 ) {
        $output .= substr( $str1, 0, 1 ) . substr( $str2, 0, 1 );
        substr( $str1, 0, 1 ) = '';
        substr( $str2, 0, 1 ) = '';
    }
    return $output . $str1 . $str2;
}
$ ./ch-2.pl
Input: $str1 = "abcd", $str2 = "1234"
Output: "a1b2c3d4"
Input: $str1 = "abc", $str2 = "12345"
Output: "a1b2c345"
Input: $str1 = "abcde", $str2 = "123"
Output: "a1b2c3de"
Submitted by: Mohammad Sajid Anwar
You are given two strings, $s and $t. The string $t is generated using the shuffled characters of the string $s with an additional character.
Write a script to find the additional character in the string $t.
I wanted to use List::Compare. It’s a good module and is worth pushing, but…
OK, there could be a way to use List::Compare to do this, but I don’t know it off the top of my head. The problem is with Perl and Preel, and the e. List::Compare saw that both sides being compared had an e and that was good enough.
But that’s OK, because it gave me a license to hack. Both strings were split into arrays and sorted, so that equivalent letters line up. "Perl" becomes ["P", "e", "l", "r"] and "Preel" becomes ["P", "e", "e", "l", "r"].
We then compare the arrays, one element at a time. I do it destructively, shifting off the elements, rather than keeping indexes, though that would work too. If the first characters are the same, shift both. Else, if one array is longer than the other, shift from that one and put the character in the output. By the rules and examples, the second word should be longer, but this code handles both cases.
(Remember the X-Files movie? Early in it, after the inciting incident happens and the conspiracy people show up, one character, Bronschweig, says this line: “Sir, the impossible scenario we never planned for? Well, we better come up with a plan.” There are cases in if statements that should not happen, like a case where the arrays don’t start with the same character but neither is longer than the other. I always try to reference that line when writing an “impossible” case.)
#!/usr/bin/env perl
use strict;
use warnings;
use experimental qw{ say postderef signatures state };
use Carp;
use List::Compare;

my @examples = (
    { s => "Perl",   t => "Preel" },
    { s => "Weekly", t => "Weeakly" },
    { s => "Box",    t => "Boxy" },
);

for my $example (@examples) {
    my $output = odd_character($example);
    my $s      = $example->{s};
    my $t      = $example->{t};
    say <<~"END";
    Input: \$s = "$s" \$t = "$t"
    Output: $output
    END
}

sub odd_character ($input) {
    my @s = sort split //, $input->{s};
    my @t = sort split //, $input->{t};
    my @output;
    while ( @s && @t ) {
        if ( $s[0] eq $t[0] ) {
            shift @s;
            shift @t;
        }
        else {
            if ( scalar @s > scalar @t ) {
                push @output, shift @s;
            }
            elsif ( scalar @s < scalar @t ) {
                push @output, shift @t;
            }
            else { croak 'Impossible Scenario' }
        }
    }
    push @output, @s if @s;
    push @output, @t if @t;
    return shift @output;
}
$ ./ch-1.pl
Input: $s = "Perl" $t = "Preel"
Output: e
Input: $s = "Weekly" $t = "Weeakly"
Output: a
Input: $s = "Box" $t = "Boxy"
Output: y
Submitted by: Mohammad Sajid Anwar
You are given a paragraph $p and a banned word $w.
Write a script to return the most frequent word that is not banned.
I have done something and I feel no guilt about it.
I have committed (and now have published) functional code that uses map but doesn’t fill an array. I have used it as a loop. map { $hash{$_}++ } 1..10 wastes the result, but that would just be ten 1s. I have seen people arguing that it’s bad form, but besides maybe being slower (I could Benchmark it to know for sure), I don’t see any solid reason why the functional technique is worse than loops.
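For what it’s worth, that Benchmark is easy to sketch. This is a rough comparison under arbitrary assumptions (word list and iteration count are made up); both closures count the same words into a hash, one with map in void context, one with a postfix for loop:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Benchmark qw{ cmpthese };

# Arbitrary corpus: the alphabet, a hundred times over.
my @words = ( 'a' .. 'z' ) x 100;

# cmpthese runs each sub the given number of times and
# prints a rates-and-percentages comparison table.
cmpthese( 1000, {
    map_loop => sub { my %hash; map { $hash{$_}++ } @words },
    for_loop => sub { my %hash; $hash{$_}++ for @words },
} );
```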
But anyway, we split on one or more non-word characters /\W+/, which has the possibility of breaking "isn't" into "isn" and "t", but the examples contain no contractions.
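A quick sketch of that edge case, since the apostrophe is a non-word character and becomes a split point:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use experimental qw{ say };

# /\W+/ splits on runs of non-word characters, so the
# apostrophe in a contraction acts as a word boundary.
my @words = split /\W+/, "it isn't over";
say join '|', @words;    # it|isn|t|over
```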
There’s a thing that’s ambiguous to me. The second example has the as the banned word, but it isn’t clear if The would also be banned. That the correct output is Perl and not perl indicates to me that case folding isn’t part of the solution. But while I’m mentioning that I’m not doing it, I think I should explain. We would previously write lc $x eq lc $y to compare two strings without bothering with case, but lc and uc don’t handle the full range of Unicode case mappings, and Unicode is increasingly common in text. fc works with Unicode.
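A sketch of the difference, using the classic example, the German sharp s, whose case-folded form is "ss":

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use experimental qw{ say fc };

# lc leaves ß alone (it is already lowercase), so plain
# lowercasing can't equate it with "SS"; Unicode case
# folding via fc can.
say lc 'ß' eq lc 'SS' ? 'same' : 'different';    # different
say fc 'ß' eq fc 'SS' ? 'same' : 'different';    # same
```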
Anyway, we use grep to ensure that the banned word doesn’t get considered, and then map { $hash{$_}++ }, as discussed previously, to count each word. Once we’re done, we can use max from the perennial favorite, List::Util, on the hash: keys gives a list of the hash’s keys, while values gives access to the counts, and thus we get the highest count. grep { $hash{$_} == $max } gives us a list of words that hit that high count (presumably a list of one element), and then return shift @output gives us the first (only) entry and the correct solution.
If abusing functional techniques is wrong, I don’t want to be right.
#!/usr/bin/env perl
use strict;
use warnings;
use experimental qw{ say postderef signatures state fc };
use List::Util qw{ max };

my @examples = (
    {
        paragraph =>
            "Joe hit a ball, the hit ball flew far after it was hit.",
        word => "hit",
    },
    {
        paragraph =>
            "Perl and Raku belong to the same family. Perl is the most popular language in the weekly challenge.",
        word => "the",
    }
);

for my $example (@examples) {
    my $output = most_frequent_word($example);
    my $p      = $example->{paragraph};
    my $w      = $example->{word};
    say <<~"END";
    Input: \$p = "$p"
    \$w = "$w"
    Output: "$output"
    END
}

sub most_frequent_word ($obj) {
    my $paragraph   = $obj->{paragraph};
    my $banned_word = $obj->{word};
    my %hash;

    # some people REALLY hate map being used in this way, believing
    # that it should end in (start with) @array = , but clearly,
    map { $hash{$_}++ }
        grep { $_ ne $banned_word }
        split /\W+/, $paragraph;

    my $max = max values %hash;
    my @output =
        grep { $hash{$_} == $max } keys %hash;
    return shift @output;
}
$ ./ch-2.pl
Input: $p = "Joe hit a ball, the hit ball flew far after it was hit."
$w = "hit"
Output: "ball"
Input: $p = "Perl and Raku belong to the same family. Perl is the most popular language in the weekly challenge."
$w = "the"
Output: "Perl"
254 is also the area code for Waco, Texas.
Submitted by: Mohammad S Anwar
You are given a positive integer, $n.
Write a script to return true if the given integer is a power of three, otherwise return false.
So, cube roots are easy. $n ** 3 gets you the cube, and $n ** (1/3) gets you the cube root. Note the parentheses: ** binds tighter than /, so $n ** 1/3 would be ($n ** 1) / 3.
But it isn’t that simple, because every positive number is some power of three if you go beyond whole-number exponents. And that’s the core of my test: $n == int $n. If we cast as an integer, wiping away the fractional part, are they still equal?
#!/usr/bin/env perl
use strict;
use warnings;
use experimental qw{ say postderef signatures state };

my @examples = ( 27, 0, 6 );

for my $example (@examples) {
    my $output = three_power($example);
    say <<~"END";
    Input: \$n = $example
    Output: $output
    END
}

sub three_power ($input) {
    my $cuberoot = $input**( 1 / 3 );
    return ( $cuberoot == int $cuberoot ) ? 'true' : 'false';
}
$ ./ch-1.pl
Input: $n = 27
Output: true
Input: $n = 0
Output: true
Input: $n = 6
Output: false
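Worth flagging: a cube root really tests for perfect cubes, so 9 — which is 3² but nobody’s cube — would come back false even though it is a power of three, and large values can wobble on floating point. A sketch of a stricter check by repeated division (note it also reports 0 as false, unlike the cube-root version above):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use experimental qw{ say signatures };

# True powers of three (1, 3, 9, 27, ...) divide down to
# exactly 1. The $n < 1 guard also keeps 0 out of the loop,
# which would otherwise never terminate.
sub three_power ($n) {
    return 'false' if $n < 1;
    $n /= 3 while $n % 3 == 0;
    return $n == 1 ? 'true' : 'false';
}

say "$_: " . three_power($_) for 27, 9, 6, 0;
```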
Submitted by: Mohammad S Anwar
You are given a string, $s.
Write a script to reverse all the vowels (a, e, i, o, u) in the given string.
We’ll want a list of all the vowels in the string, so we’ll break apart the string (split //, $string), collect the vowels ( grep {/[aeiou]/mix}), make them all lowercase for convenience (map {lc}) and reverse them, so they’re in the right order when we start to replace.
Which we do with substr, which works as both an lvalue (substr($string,1,1) = $char) and an rvalue ($char = substr($string,1,1)). We loop through an index value for all characters, testing if it’s a vowel (again with /[aeiou]/mix), converting to the case of the letter it replaces ($n = uc $n if $c eq uc $c), then replacing (substr( $string, $i, 1 ) = $n).
I’ll point out that both Perl from the examples and weekly challenge come out the same, because their vowels are palindromic: e and eeaee, as the case may be, so e replaced e. They’re totally different es, I swear.
#!/usr/bin/env perl
use strict;
use warnings;
use experimental qw{ say postderef signatures state };
use List::Util qw{ max sum0 };

my @examples = ( "Raku", "Perl", "Julia", "Uiua", "Dave",
    'signatures', 'weekly challenge' );

for my $example (@examples) {
    my $output = reverse_vowels($example);
    say <<~"END";
    Input: \$s = "$example"
    Output: "$output"
    END
}

sub reverse_vowels ($string) {
    my @vowels =
        reverse
        map  { lc }
        grep { /[aeiou]/mix }
        split //, $string;
    for my $i ( 0 .. -1 + length $string ) {
        my $c = substr( $string, $i, 1 );
        if ( $c =~ /[aeiou]/mix ) {
            my $n = shift @vowels;
            $n = uc $n if $c eq uc $c;
            substr( $string, $i, 1 ) = $n;
        }
    }
    return $string;
}
$ ./ch-2.pl
Input: $s = "Raku"
Output: "Ruka"
Input: $s = "Perl"
Output: "Perl"
Input: $s = "Julia"
Output: "Jaliu"
Input: $s = "Uiua"
Output: "Auiu"
Input: $s = "Dave"
Output: "Deva"
Input: $s = "signatures"
Output: "segnutaris"
Input: $s = "weekly challenge"
Output: "weekly challenge"