((lambda (x) (x x)) (lambda (x) (x x)))

Friday, December 5, 2008

Func Users Module v0.3 is released!

A new version of the func users & groups management module has been released for public use and/or review. The updated module can be found here: Func Users Module v0.3.

This release includes some coding style alterations to the already extant informational methods, as well as a boatload of new methods that will allow you to:

- Create, delete, and modify groups.
- Create, delete, and modify users.
- Manage the memberships of users within groups.

To do:

The current implementation of user_add has very few options, and incorporating those options makes up most of the feature list for v0.4. In the meantime, most of the users you'd like to create can still be created: first create a user with relatively default settings using user_add, and then modify any properties you'd like changed. It would be nice if these additional steps could be cut out and the user instantiated with the desired options in a single method call.
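For the curious, calling the module from the func command line looks roughly like this - the minion name is illustrative, and the argument layout is my shorthand rather than the definitive signature, so check the module source for the real interface:

func "webserver1" call users user_add gregm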

Enjoy!

Tuesday, December 2, 2008

LPT730 Assignment #2

LPT Assignment 2: Converting a DV video to distribution formats using open source tools on Linux


Step 1: Exporting the video from Quicktime to DV:



First, I took the provided file and converted it to DV. This was accomplished by loading the file in Kino, selecting 'export' on the vertical tab bar, then 'DV file' on the horizontal tab bar, and finally hitting the 'export' button and choosing the desired output filename.


Step 2: Improving the audio quality:



The provided audio file had several problems. It was recorded in stereo, but the two channels appear to actually be two mics: one worn by the speaker, and one worn by an announcer who speaks only briefly at the beginning. To correct this, I used Kino to export the audio track from the DV file and loaded it up in Audacity. In the editor, I amplified the brief period of speech from the announcer, mixed it over the speaker's track from the other channel, and then copied that channel over both, creating a mono mix in which both speakers are clearly audible. Noise reduction was then applied to reduce some of the ambient noise, after which the file was normalized to 0 dB.




Step 3: Replacing the audio in the original DV file with the corrected audio



To replace the audio with our new, cleaned-up version of the track, I launched Cinelerra, imported both the video from the DV file and the audio file produced by Audacity, and selected the 'render' option from the file menu to produce a new DV file with the new audio.


Step 4: Transcoding to Theora



To transcode the DV file to Theora, I installed the ffmpeg2theora package and its dependencies and, after reading the man page to determine the syntax, ran the script shown above. This actually produces many more files than we need, at a variety of quality settings. To select the final files, each produced file was checked with the 'ogginfo' command, and those with bitrates closest to the desired bitrates were selected.
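The script itself isn't reproduced in this post, but it ran along these lines (a sketch only - the candidate bitrates here are illustrative, not the ones actually used):

#!/bin/bash
# Produce one Theora file per candidate video bitrate (in kbps):
for vbr in 200 400 600 800 1000; do
    ffmpeg2theora -V "$vbr" -A 128 -o "out-${vbr}.ogv" Out.dv
done
# Then check the actual bitrate of each result:
for f in out-*.ogv; do ogginfo "$f"; done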


Step 5: Transcoding to MPEG2



To transcode the video to MPEG2, the following commands were saved to a file, 'trans', and then executed. The first pass ('-R 1') only gathers encoding statistics against a null output; the second pass ('-R 2') performs the actual encode through an MPEG2 export module:

transcode -i Out.dv -o out.avi -V -J smartyuv,hqdn3d,msharpen -Z 640x480 -y null -w 1800 -b 256 -R 1;
# mpeg2enc below is one available MPEG2 export module; my best guess at the intended one
transcode -i Out.dv -o out.avi -V -J smartyuv,hqdn3d,msharpen -Z 640x480 -y mpeg2enc -w 1800 -b 256 -R 2


Step 6: Transcoding to H264



To transcode the video to H264, I installed Mark Pilgrim's podencode Python script and ran it on the AVI file (unfortunately, I could find no working tool to convert this directly from the DV file).

Saturday, November 22, 2008

Virt-manager Lab

So, I was finally able to find some time to complete the FreeBSD virtual machine lab for SYA710. I wasn't able to get it done while on campus, so I ended up doing the lab remotely on our China machine. This didn't require much serious modification to the lab - the only change was the use of a regular file for the main disk rather than a physical disk, as there weren't any partitions to spare on China.

After a few initial failed attempts to install the ports collection from an FTP server, I ended up redownloading the ISO images and using those to get the ports collection installed. Getting X and a copy of the emacs text editor running was relatively simple, though compiling emacs on FreeBSD turned out to be a startlingly lengthy process.



I'm quite impressed with the capabilities of the virsh and virt-manager pairing alongside KVM. The ability to have virtual machines start on boot certainly is a handy one to have around.
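If you haven't played with it, flagging a domain to start at boot is a one-liner (the domain name here is illustrative):

virsh autostart freebsd1

and 'virsh start freebsd1' will boot it on the spot.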

Tuesday, November 18, 2008

SPR720 Number Game Lab

So, I just finished the number guessing game for SPR720... most of the conceptual material was pretty old hat, but it provided a convenient excuse to familiarize myself with a little bit more of the Python syntax surrounding classes, initialization of objects, custom exceptions, etc.

Without further ado, here's the code...

Clicky

LPT730 Lab #7 -- Magnet Links vs. Torrents

Magnet Links vs. Torrents

A torrent file is a small file which can be downloaded and, by providing it to a compatible client (Azureus, uTorrent, Transmission, etc.), used to download the file it describes. In the scheme used by torrents, the file to be distributed is segmented into a number of pieces, to each of which the SHA1 hashing algorithm is applied. The hash value for each piece is stored within the torrent file, and it is this series of hash values that is used to identify, recover, and reassemble the segments described.

Magnet links are an open standard for a URI scheme that accomplishes a similar goal through slightly different means. In this case, the user is offered a specially formatted URI containing a hash value that will be used to identify the correct file. The idea behind a magnet link is that some external P2P program (such as Vuze, Bearshare, or DC++, among others) will be used to locate the file whose hash corresponds with the one in the URI. Magnet links may use a variety of hash algorithms, but unlike torrent files, the file is not segmented before being hashed: it is the entirety of the file that is used to produce the hash value.
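For reference, a magnet link looks something like this (the hash and filename are purely illustrative):

magnet:?xt=urn:sha1:YNCKHTQCWBTRNJIV4WNAE52SJUQCZO5C&dn=example-file.iso

The 'xt' parameter carries the hash (here a SHA1 in Base32 form), and 'dn' is an optional display name.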

Even before any technical factors are considered, the advantage rests solidly with the BitTorrent protocol, due largely to its popularity among the general public: the use of torrent files to acquire various forms of media has become popular even among non-technical users, and as such, the chances of a given receiver having the software required to handle a torrent file are higher than the chances of them having software compatible with the magnet link scheme.

When technical aspects are considered, the advantages possessed by the BitTorrent protocol become even more apparent. BitTorrent's technique of segmenting files before hashing them means that a peer node need not possess the entirety of the desired file to begin providing it to a requesting client: possessing only some segments of the file in question, the peer node will nonetheless register a match when the hash of the requested file is presented to it, and can immediately begin transferring the segments it possesses to the requesting client.

Secondly, BitTorrent makes things simpler for the distributor, as it is guaranteed that clients seeking their file will be using one particular protocol (namely, BitTorrent), unlike in the magnet link scheme, in which a client may be using any of a number of protocols to transfer the file in question. That makes it easier for the distributor to reinforce the group of seeders with nodes under their own control, simply by configuring them to seed the file over the BitTorrent protocol: there is no need to create a set of seed nodes for each of the possible protocols a client may turn out to be using.

To conclude, I believe this one to be a clear win for BitTorrent.

Friday, November 14, 2008

Write Perl? You want this module.

The other day, I discovered a great, easy-to-use profiling tool that anyone who writes Perl is bound to love. The tool is called Devel::NYTProf, and you'd definitely be doing yourself a favour by checking it out. It'll provide you with beautifully readable, comprehensive reports (in a variety of formats - I haven't tried them all yet, but so far I like HTML) on Perl programs of your choice, in which you'll find all sorts of useful information about what your program actually spends its time doing.


To get ahold of it, just run the following command:

'perl -MCPAN -e "install Devel::NYTProf"'


To try it out on a Perl program of your own (let's say it's called 'example.pl'), just run...

'perl -d:NYTProf example.pl'


This will result in the creation of a file called 'nytprof.out' in your current directory. You can output a human readable HTML version of the report using a command like:

'nytprofhtml nytprof.out'


The 'nytprofhtml' command will have produced a directory called 'nytprof' in your current directory containing the report in an HTML format. You can open the report in your web browser of choice with the following command:

'xdg-open nytprof/index.html'


If you'd like to see an example of the sort of output it produces, you can check out the output it gave while profiling some of my code here:

'http://matrix.senecac.on.ca/~gjmasseau/nytprof-example/index.html'


From here, you can navigate through the report to view all sorts of useful information to help you optimize your program.


Until next we speak, happy profiling!

Monday, November 10, 2008

Func Users Module v0.2 is released.

So, after some brief discussion with alikens and showing him the work that has been done so far, I've decided that version 0.2 of the Func users and groups module is now released. It's been mailed to the project lead, submitted on the mailing list, and is available for general download here.

Features on the list for version 0.3:
- User creation.
- Group creation.
- Deletion of either of the above.
- Managing group membership of users.

Possible for v0.3, but maybe in 0.4 depending on time:
- Implementation of the remainder of the possibilities offered by usermod and groupmod.

At this rate, the users and groups module should probably be completed well before the end of the year - in that case, I'm going to take a look at what other holes remain in the module selection and see what looks most interesting. Maybe the crontab module, as it'd give me an opportunity to fiddle with mangling files from Python in more detail.

Sunday, October 19, 2008

SYA 710 Kernel Compile Lab

It's a week or two late - I've been busy with the Func project, to the exclusion of much else - but after a few attempts the kernel compilation lab is done. I grabbed the freshest 2.6.27 off kernel.org, grabbed the required packages, and off I went to spin up a new kernel. There were some initial problems with one step in the compilation of some particular component ('no rule to make target 'n/n' on a file 'ngen.so''), but a bit of Googling quickly turned up a solution. The solution (which involved simply answering a single question about external firmware loaders differently during 'make menuconfig') was also from a Fedora Core 9 user, so perhaps there's a bug somewhere there that needs to be worked out. One of the later commands in the course notes regarding kernel compilation - the syntax offered for 'mkinitrd' in particular - did not work for me, and I've added a note with the syntax that did to this page of the wiki.
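For reference, the general shape Fedora's mkinitrd expects is the output image followed by the kernel version (the version string below is illustrative):

mkinitrd /boot/initrd-2.6.27.img 2.6.27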

Thursday, October 9, 2008

Byebye badware.

Woot. Just finished removing everything related to Mono from my main Ubuntu system. Feels good.

Wednesday, October 8, 2008

Func Project Report 1

So, after some time of thought, high-level study, and waiting for decisions to be made regarding exactly who's writing exactly what, real work on actually getting things done with Func seems to have begun in earnest.

Chris Tyler cooked up and cloned a small handful of Fedora VMs to use for our development and testing network, and so tonight I've taken to setting them up and trying to get all the pieces in place for a working Func environment - luckily, there don't seem to be too many. Installing Func itself was a breeze, but upon moving to actually setting them up to interact with one another it seems they expect a working DNS system around them - so, my current task is setting up bind. Luckily, that's something I've done a few times, so doing it again should only be a brief distraction from setting up Func itself.

I'm still trying to get a firmer grasp on the Python language itself; while most of the concepts are familiar ones, I'm finding the actual syntax to be somewhat unappealing - to my eye, at least, it just doesn't read very well, and the flow seems somewhat unnatural to me. Nevertheless, while it seems unlikely that Python will become my language of choice, it's nothing a bit more time and practice won't solve.

I've been skimming the actual Func code, the modules in particular, and while I haven't quite attained complete understanding of all the intricacies yet, my grasp is growing better the more I read.

Back to named.conf for me.

Sunday, October 5, 2008

SPR720 Packaging Lab

So I've just completed the SPR720 packaging lab. For my package, I'd chosen Csound, a C-like language for audio synthesis. The .spec file required some modification to be suitable, as Csound uses a Python-based build system called 'scons' and an accompanying 'install.py' script rather than the usual './configure && make' routine, but this part was fairly minor. Some additional modifications were required to compensate for Csound's tendency to, by default, place several of its executables in '/bin' instead of '/usr/bin'.
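To give a rough idea of the shape of it, a scons-based spec ends up with sections along these lines (a sketch only - the scons variable and install.py options shown here are assumptions, not necessarily what Csound's build actually accepts):

%build
scons prefix=%{_prefix}

%install
rm -rf %{buildroot}
python install.py --prefix=%{buildroot}%{_prefix}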

All in all, the process wasn't unusually difficult, but it might be nice if the documentation were given a bit more polish - maybe a separate, more rudimentary tutorial on the procedures, with more examples, to accompany the main document.

The RPM files produced can be found here.

Monday, September 29, 2008

SPR Build Lab Notes

For the build lab, I downloaded the latest version of Csound (a C-like language for musical composition and synthesis in which source files compile to either audio or MIDI output files).

Building the package was a little unusual, but not too difficult: rather than the standard configure and make set, the package had to be built using a system called 'scons'. After resolving this dependency, as well as acquiring the development headers for libsndfile, the build and install proceeded flawlessly.

LPT730 Lab #4 Reflections.

The chart creation tools provided in OpenOffice.org's Calc application are, to be frank, terrible. The interface is unintuitive and the output is unattractive at best. While the OpenOffice.org suite is certainly strong in other areas, in my opinion this particular feature is one that could use quite a bit of work.

The creation of man pages is, as usual, a fairly straightforward procedure, so nothing really new there. The only item of interest is what appears to be a slight bug in the software for which the man page was being written; it has been noted in the resulting man page and will hopefully be corrected.

Monday, September 22, 2008

Phishing and the Robots Exclusion Standard

Phishing:

'Phishing' is a term describing a variety of techniques for perpetrating criminal activity via the Internet. The core mechanism is producing content that first deceives its consumer into believing the content's creator is a trusted party, and then creates a plausible motivation for the consumer to take a course of action that results in their private information being transmitted to the content creator. The type of personal information sought most often is banking or financial information such as bank account or credit card numbers. Common trusted parties that a phisher may try to impersonate often include (but are certainly not limited to) banks, insurance agencies, and other financial institutions. The most common means of delivering the deceptive content is an email message inviting the user either to reply with a message containing personal information, or to click on a link leading to a similarly deceptive website into which personal information may be entered. These are not the only means used, however, and email is not always involved - another popular technique is to purchase domain names closely resembling that of a trusted entity but containing single-letter typos, in the hope of deceiving unwary web surfers who accidentally input the incorrect address into their browser and thereby come across the false site.

The best defense an individual user can take against such a scheme is unfortunately not a technical one that can simply be installed on their computer and trusted to do its work, but simple perceptiveness on the part of the user. A user should turn a critical eye towards any online communications with institutions likely to be impersonated. The deceptive content produced by phishers is often of less than sterling quality, and a careful observer is likely to notice many minor errors in its text. If the user has any suspicions regarding the veracity of a communication, they should immediately contact the trusted party through another channel, using contact information acquired previously and known to be correct, and seek confirmation of whether the communication received was authentic.

Numerous technical measures exist with the goal of making it easier for users to determine whether a suspect email message or website is legitimate, but the problem is at its core not a technical one, and as such no purely technical solution can truly solve it. Educating users in how to identify false communications is the only method likely to have any impact.


Robot Exclusion Standard:

The Robot Exclusion Standard is an informal convention adopted by many webmasters and search engine operators that allows webmasters to determine which pages on a particular website may be indexed by web robots - the tools search providers use to discover and index pages on the World Wide Web. The convention states that files and directories specified in a specially formatted text file named robots.txt, placed in the top directory of a web site, will not be indexed by web robots complying with the standard.

The biggest weakness is that, as this is an informal convention, there is no particular requirement that any given piece of web robot software obey the standard; providers who wish to disobey it are able to do so largely at their leisure, with no particular legal countermeasure available to discourage such behaviour.

An example robots.txt configuration, taken from my personal home web server, is as follows:

User-agent: *
Disallow: /

When the robots.txt file is configured as above, all cooperating web robots will refrain from indexing any content on the web site in question.
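Finer-grained configurations are possible as well. For instance, a hypothetical site could block one badly behaved robot entirely while only keeping the rest out of a couple of private directories (the robot name and paths here are made up):

# Block one robot completely:
User-agent: BadBot
Disallow: /

# Keep all other robots out of two directories:
User-agent: *
Disallow: /private/
Disallow: /drafts/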


References:
(This section left blank as no published articles or other sources of information were referenced during the writing of this article.)

SPR720 Lab #3

Most of the material in this particular lab was already familiar to me - most of it from personal experience and usage, and a little that I picked up in OPS435. I'm honestly pretty confident in my bash scripting skills, and while it's definitely not my language of choice for most purposes these days, it can still be useful for creating quick and dirty, usually temporary, solutions to small obstacles blocking the route to a solution to some larger problem. For review, I completed the following scripts from the list of suggestions accompanying the lab:

- Count the number of users with user IDs between 500 and 10000 on the system (user IDs are the third field in /etc/passwd).
- Count the number of files in the user's home directory which are not readable.
- For each directory in the $PATH, display the number of executable files in that directory.

The programs solving these problems can be found here.
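For anyone curious about the general approach, here are rough sketches of one way each could be done (sketches only - not necessarily how the linked versions work):

# 1. Users with UIDs between 500 and 10000:
awk -F: '$3 >= 500 && $3 <= 10000' /etc/passwd | wc -l

# 2. Non-readable files in the user's home directory:
count=0
for f in "$HOME"/*; do
    [ ! -r "$f" ] && count=$((count + 1))
done
echo "$count"

# 3. Executable files in each $PATH directory:
echo "$PATH" | tr ':' '\n' | while read -r dir; do
    echo "$dir: $(find "$dir" -maxdepth 1 -type f -executable 2>/dev/null | wc -l)"
done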

Tuesday, September 16, 2008

NAD710 Lab #2 Reflections

Lab two for this course was at least a little more interesting, as I got to encounter at least one new tool this time. While a lot of the tools used - ifconfig, the arp commands, etc. - were already familiar to me from either personal use or previous experience in other courses, the 'ip' tool for configuring the network card was something I hadn't used before. The 'ip' tool's syntax reminds me of the style used for configuring network interfaces on Cisco routers, though it's not quite identical in a few places. 'tcpdump' was also new - I've used tools with similar functionality from the GUI, but it's always preferable to have a command-line version if possible, so I'll probably be playing with this tool a bit more in my spare time.
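For anyone who hasn't met the 'ip' tool yet, a few invocations and their rough old-school equivalents (the interface name is illustrative):

ip addr show eth0      # roughly 'ifconfig eth0'
ip link set eth0 up    # roughly 'ifconfig eth0 up'
ip route show          # roughly 'route -n'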

Friday, September 12, 2008

SYA710 Lab 01 Thoughts

SYA710 Lab 01 has been submitted, and mostly felt like review for me. A lot of the material covered in this particular lab bears a good resemblance to things I've done in some other classes, and I'd been using most of the more basic file and disk utilities shown on my home systems for several years even before then. While I don't have quite as much experience with LVM as with the other tools listed, I'm pretty confident with that toolset as well -- it's always good to review though. Most of my multi-disk setups at home revolve around the md raid system rather than LVM, and learning alternative/complementary tools is always useful -- maybe next time one of the home servers gets rebuilt I'll layer LVM with md, just for fun.
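If I do, the rough shape of that setup would run something like this (device names and sizes are illustrative only):

# Mirror two partitions with md, then layer LVM on top of the array:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 20G -n srv vg0
mkfs.ext3 /dev/vg0/srv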

LPT730 Lab01

Software Patents

I do not believe that software patents are a good idea in general, at least not in the form in which they currently exist in most countries.

A large part of the difficulty comes from the fact that software patents are difficult for typical patent office employees to evaluate effectively, and as such a disturbingly high number of patents have been approved that, most likely, should not have been. When evaluating a patent application for a software practice, patent reviewers do not have time to learn an entire new discipline; they are typically not software developers, and are unlikely to fully understand the material contained within the application. As such, many 'bad' software patents exist. Usually, the element making these patents 'bad' is that they have been granted for broadly defined techniques or strategies rather than for a particular, specific implementation -- and, as per normal practice in the granting of patents, patents are intended to apply only to particular implementations or methods of accomplishing a goal.

While I don't necessarily see anything wrong with patenting a particular, specific implementation for achieving a particular goal in software, the cost of the patent office retaining evaluators knowledgeable enough to effectively evaluate these patents is one that most governments are unlikely to be willing to pay. Needless to say, if the patent office were handling the granting of software patents in an effective manner, these patents would become far less valuable to companies, as the chances of receiving a broad patent that one could use to litigate one's way to riches would be far lower.

Bill C-61

While the odds of Bill C-61 becoming law in Canada currently appear to be low, I would still be in favour of such a law, if not for the reasons one might think.

Bill C-61 would, if passed, impose many onerous new restrictions on how an end user may make use of copyrighted content they own, mostly by making the circumvention of technical measures intended to prevent the copying of data an illegal act.

Many in the open source community are up in arms over the restrictions on users' freedom this bill would impose, being as usual all for the freedom to do what they wish with information in their possession. I would suggest, however, that they look at things in a different light: what Bill C-61 really does is make using, possessing, or acquiring copyrighted content distributed under a proprietary licence far less attractive than it was previously. The more careful the user has to be in handling the information, and the more they have to worry about accidentally doing something illegal with it and not realizing until it is too late, the less they will want to have anything to do with this dangerous, legally risky proprietary media.

Sunday, September 7, 2008

LPT730 Lab #00 - Two useful applications to get your podcasts.

LPT730 Lab #00 - Two Applications

The two applications I am going to talk about today may interest students looking for a client they can use to acquire the podcasts mentioned in my blog entry here. These two applications are Rhythmbox and bashpodder.

Both of these applications are podcasting clients ('podcatchers'), but they differ in their approach, so some people might prefer one over the other. The biggest difference between these two applications is that Rhythmbox is a graphical (GUI) application while Bashpodder is a non-graphical (CLI) application. The installation examples provided for Bashpodder assume you are running Ubuntu Linux, but will likely not require much alteration for other distributions.



Rhythmbox is not only a podcatcher but a full-fledged audio player, capable of playing audio in any format for which a GStreamer codec exists, including popular formats like WAV, AAC, MP3, FLAC, OGG and WMA. Rhythmbox presents the user with a typical 4-pane interface that will be familiar to users of other clients such as iTunes.

To subscribe to a podcast from within Rhythmbox, simply right click on the 'Podcasts' item in the leftmost pane and click 'New Podcast Feed' in the context menu that appears. This will present you with a dialog in which you can enter the address of the RSS feed for the podcast you wish to subscribe to. Once you have done this, new episodes will be downloaded automatically on a periodic basis, or you can initiate a check for new episodes manually by accessing the same context menu on the 'Podcasts' item and selecting 'Update All Feeds'.



The second client, Bashpodder, is a command line application: in fact, it is a simple bash script taking up less than a page of printed space. It provides no visible output whatsoever, and so the screenshot provided is of the program's actual code. Nevertheless, it is a fully functional, lightweight and elegant solution for updating your podcast selection. This one is a little trickier to get going, since there probably isn't a package for your distribution, but by the standards of command-line applications it's on the easier end: simply download two files from the website (bashpodder.shell and parse_enclosure.xsl), place them in the directory you'd like to store them in (for instance, '~/bin/'), and link the main file somewhere in your path with a command like 'sudo ln -s ~/bin/bashpodder.shell /usr/local/bin/bashpodder'. Now, create a file named 'bp.conf' in the folder you put bashpodder.shell in, and place the addresses of the RSS feeds for the podcasts you wish to keep up to date on in this file, one per line.
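A bp.conf is just a list of feed URLs, one per line (the addresses below are placeholders, not real feeds):

http://www.example.com/feeds/linux-show.rss
http://www.example.com/feeds/perl-talks.rss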

Now, typing 'bashpodder' will download any episodes from these feeds you have not downloaded before into folders labelled by date. By default these folders are created in the same folder bashpodder is run from, but you can easily modify a single line in the bashpodder.shell file to specify a specific destination directory.

The one disadvantage is that, by default, Bashpodder provides no way to automatically update the podcasts you have subscribed to. This is quite easy to fix, however: simply run the command 'sudo ln -s ~/bin/bashpodder.shell /etc/cron.hourly/bashpodder' and the crond scheduling daemon will automatically run the program once an hour, easily solving the problem of keeping your podcasts up to date.

SPR720 Command Lab

The system used for this lab is running Ubuntu 8.04 with a handful of extra packages installed.

Blog about your experience and what you've found.

1. Examine the /bin, /usr/bin/, /sbin, and /usr/sbin directories. For each directory, examine the number and type of commands (use ls|wc -l to count the number of files, and ls to view the filenames).

/bin: This directory contains 110 files on this system, mostly basic GNU utilities which were preinstalled with the distro's default packages. No executables related to GUI packages or user-installed packages appear to be currently located in this directory.

/usr/bin/: This directory contains 1726 files. While most executables related to packages installed by the user are visible in this directory, the majority of the files here are ones preinstalled with the distro's default packages. More GUI application packages, like xterm, own binaries located here.

/sbin: There are 153 files located here, most of these are binaries for programs related to management tasks likely to only be used by privileged users. These include programs for things like management of users, management of the hardware, and management of storage devices.

/usr/sbin: There are 264 files located here. The general purposes of most packages owning the binaries located here are for the most part similar to those in /sbin. The largest clear distinction I can make is that those located in this directory seem more like things the user could choose to remove from the system should they wish, while those in /sbin look far more critical, like they should properly be left alone for the system to function as the creators of the distribution intended.

2. Go through the files in /usr/bin and for each file, note whether it's a command you have used. Select 25 of the commands you haven't used and research what they do (use manpages and online resources).

The full list of which programs on my system I have used and not used is quite lengthy and can be found here.

as - This is the GNU assembler -- not likely to be useful to me right now, as I'm not currently writing any assembly language code.
bashbug - This helps the user compose and send bug reports relating to the bash shell. At least this thing isn't set to go off automatically, the way the 'error reporting' features in some other operating systems are -- that's always really irritating to have to disable.
botti - This lets you run an irssi module without a visible UI, and is apparently a popular way of writing bots. Nevertheless, I think I'm going to stick with doing the whole job of the IRC bot for our channel in Perl, rather than as an irssi module.
dirname - This program strips the filename from a filepath, returning the path to the directory in which the file is located; i.e., 'dirname `realpath ./file`' should print the current working directory. Should be useful for scripts.
dirsplit - This program takes a directory of files and splits it into multiple directories, each containing files up to a specified size limit. Should be useful for splitting up files into DVD-sized folders for backup.
display - This program is part of ImageMagick, and displays a specified image on an X server.
expiry - This command checks whether the user's password is near expiry and, if called with the '-f' option, can force them to change it.
GET - This program stood out to me due to its capitalized name. It turns out to be something that comes with LWP. It issues an HTTP GET request against a provided URL and returns the requested resource or an appropriate error. Similar programs exist for HEAD and POST.
identify - This program reports the format of an image file, as well as additional information about various characteristics of the image.
id - This command prints information regarding a specified user. The default output seems to include the uid, gid and group memberships.
locale - This program provides various locale-specific information, like paper sizes, name and telephone number formats, and time/date formats. The output looks like it's formatted so it can be easily executed to set environment variables in a shell.
mogrify - This appears to be a command-line program for modifying images. The manpage describes a variety of filters (blur, crop, dither, etc.) that can be applied to the input file, as well as several other operations, such as resizing, that may be performed upon it.
montage - This program creates a composite image out of a set of discrete images. The images can be tiled, and a border, frame or name applied to the resulting output file.
mousetweaks - This program is a daemon providing mouse-related accessibility functions within the GNOME desktop environment. It appears to normally be called with the '-e=STRING' switch, where STRING specifies the accessibility features to enable.
nl - This program is a filter which adds line numbers to a file. I can think of a few times back in OPS435 that this program would have come in useful...
on_ac_power - Unsurprisingly, this application does exactly what its name implies, returning true if the computer is plugged into AC power and false if it is running off a battery. I was more surprised that this already existed as a command than by what it does; it could prove handy, so I'll make note of it in case it proves useful in a script.
realpath - Another one that looks like it could be very useful in scripts: it returns a canonicalized absolute path when provided with a filename as its argument.
rev - This program is a filter which reverses the order of characters on each line of either standard input or an input file specified as an argument.
setarch - This program alters the output of uname -m, causing programs to believe they are running on a different architecture than they really are and behaving accordingly.
shred - This program overwrites a file with a series of random bytes, and optionally unlinks the file as well. This is useful to make a deleted file more difficult to recover, making deletion of files containing critical data somewhat more secure.
shuf - This program is a filter, randomizing the order of lines received via standard input and then outputting the result to standard output.
strfile - This program accepts a file containing a series of strings delimited by lines containing only a '%' sign (or another delimiter specified using the '-c' switch) and produces a datafile containing both these strings in a binary format and a table of contents with offsets locating the beginning of each string, allowing random access to the strings contained within the datafile.
toe - At first I figured this might somehow be related to head or tail, but it turns out that this command lists the terminal types available to terminfo.
unstr - This program goes along with 'strfile', reversing the process: it accepts the datafile as input and produces the corresponding plain text file as its output.
volname - This program returns the volume name of a CD-ROM containing an ISO-9660 file system.
w - This command stood out because of its single-character name, and turns out to be a utility listing logged-in users along with the processes they are running, as well as various details about those processes.

3. That's what this post is. ;)

Introductory Post (LPT730 Lab #00)

Hi everyone, nice to meet you all. My name is Gregory Masseau.

As far as educational background goes, I am a recent graduate (with honours) of Seneca's Computer Networking and Technical Support program. As far as work goes, I have run my own computer repair and services business for several years as a small home business, operated an internet cafe for about a year, and worked as a lab assistant in the Seneca CNS program.

My background with computers is pretty diverse, as I've played with at least a little of everything. I first started computing on an old Yamaha CX5M, an MSX computer running on a Zilog Z80 with a built in 4-operator FM synthesizer. Since then I've used the Macintosh operating system (classic), DOS, Windows starting from 3.1, OS X, Linux and BSD, as well as tinkering with a few obscure operating systems here and there. My main technical interests these days include networking, programming and audio production.

As far as networking goes, I have a small home network with maybe a dozen machines, about two thirds of which are mine, and run a number of services locally on a few small home servers including DNS, WWW, FTP, SSH, VNC, Ventrilo, SMB, NFS and MySQL. Most of the servers are now running Ubuntu Server, but there's still one left running Slackware 10.

On the programming front, I mainly work these days in Perl, Lisp and Scheme, with a little C when I have to and a little bash to tie it all together. I'm working on a number of projects, which you can learn more about on my user page on the wiki.

I'm also a bit of an amateur musician, and have a small home studio consisting of a handful of hardware synthesizers (Waldorf MicroQ, Roland Alpha Juno-2, Korg Poly 800), a few outboard soundcards, and some MIDI controllers and I/O boxes. I mostly use Apple's Logic DAW as a host these days, though I've used a number of others in the past including Cubase, FL Studio, Cakewalk and even a tiny bit of Pro Tools.

Extracurricular interests of mine include gaming (whether computer games or tabletop RPGs), audio production/composition, screen printing/stencilling, and spending far too much time on the computer.

NAD710 Lab #01

NAD710 Lab #01:

1. What is the kernel version of Linux on matrix?

I am a little unsure whether the desired answer is the version or the release of the kernel, as people commonly say 'version' or 'version number' when what they actually mean is the release. As such, I provide both answers. The kernel version on Matrix is '#1 SMP Tue Feb 12 09:16:51 EST 2008', while the release is '2.6.18.8-0.5-default'.

2. What is the IP address and MAC address of the Linux machine on matrix?

The IP address of the eth0 interface on Matrix is '192.168.1.56', while the MAC address is '00:03:47:E9:89:BF'.

3. What is the network mask on matrix?

The network mask of the eth0 interface on Matrix is 255.255.255.0.

4. What are the network addresses of the Linux machine? (there should be three networks)

I believe I might be misinterpreting this question, because as far as I can see, this machine is connected to two networks: the 192.168.1.0/24 network and the 127.0.0.0/8 network.

5. What is the IP address of the gateway for the Linux machine on matrix?

The gateway used by matrix is located at the IP address 192.168.1.254.

6. What is the command to display all the currently loaded kernel modules?

The command 'lsmod' will show the currently loaded kernel modules.

7. Where is the file for the kernel module called "e100"?

The file for the e100 kernel module is located at '/lib/modules/2.6.18.8-0.5-default/kernel/drivers/net/e100.ko'.
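(Rather than hunting through /lib/modules by hand, a quick way to find this is 'modinfo -n e100', which prints the full path to the module's file.)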

8. What is the MAC address for the network device that has the IP address 192.168.1.254?

The MAC address of the machine whose IP address is 192.168.1.254 is 00:0E:0C:7F:84:6F.

9. How do you display all the physical network addresses known by a Linux machine?

The command 'arp', located on Matrix in the '/sbin/' directory, when executed in the form 'arp -a', will show all physical-layer addresses of neighbouring machines that have been cached by the local machine.

10. What is the MAC address of the network device on the Linux machine on matrix?

This question appears to overlap with #2 - maybe I am misunderstanding it? It appears to me that the MAC address of this machine* is '00:07:E9:F6:36:1f'.

* I completed portions of this lab on two different occasions, and was not logged onto the same node during the writing of the answer to question #2 as during the time of the writing of the answer to question #10.

Friday, September 5, 2008

Some podcasts that LUX students (and profs!) might enjoy.

I thought I might recommend some podcasts that may be of interest to LUX students. There really is a surprisingly prolific community of podcasters doing some excellent shows about Linux and open source software, so you don't have to tire your eyes by limiting your learning to reading things off a page or screen - and as many of these shows are quite entertaining, it's not only educational but also recreational.

The Linux Action Show: This one probably has the most in sheer entertainment value, as well as providing a very nice way to keep up on a lot of news relating to Linux and open source that one might otherwise miss. The show is very slickly produced, which is all the more impressive given that it's entirely produced by two people in their spare time. Typical weekly segments include an overview of a new Linux-powered device each episode, a news review, a listener commentary section, and then usually either an interview or a discussion on some topic for the remainder of the episode. The energy between the hosts is excellent; they're both quite hilarious, and their enthusiasm is infectious. An excellent show which I highly recommend.

FLOSS Weekly: FLOSS Weekly is a high quality podcast hosted by Randal Schwartz (of 'Learning Perl' and 'Perls of Wisdom' fame) and Leo Laporte (formerly of TechTV's 'Screensavers'). Each episode includes some discussion of recent news, usually followed by an interview with some open source luminary or the developer of an interesting project. The show is very successful in finding well-known, high-profile interview subjects, so you can expect to hear from some people doing very interesting work. The show can also be watched live, with full video, at Twit Live every Wednesday at 2 PM.

Going Linux: Probably the most educationally oriented item on this list, this podcast provides lessons in performing various basic operations and tasks in a Linux environment. It is mostly geared towards newcomers, so some of the more advanced among you might not find much here, but for those just beginning to learn their way around the Linux world it should prove an invaluable resource in learning some of the basics. The hosts are friendly and quite thorough in their coverage of the topics they select, presenting them in a simple and straightforward manner that will appeal to new users.

Lugradio: Sadly now defunct - there will be no new episodes of this show. A British Linux podcast whose episodes usually consisted of a roundtable discussion between its hosts (who include the always entertaining Jono Bacon, community manager for the Ubuntu distribution), the show was both educational and hilarious. Despite the fact that the show has ended, its backlog is well worth listening to.

The PDX Perl Monthly meetings: These podcasts are audio recordings of talks from the Portland Perl users group. They tend to be very in-depth and technical, and usually focussed on some particular module or other, so they can be a bit hit or miss depending on whether the specific topic under discussion is one that interests you. However, it's certainly worth subscribing to, as when they do hit a topic of interest to you, you're likely to learn quite a bit.

Perlcast: A wonderful, if infrequently released, podcast concerning the Perl language and the community around it. Episodes frequently include interviews with developers working on some very interesting Perl-related projects, as well as talks regarding various particular technologies and how to best make use of them from Perl.

If you're new to podcasts, they're simply audio files, resembling radio shows, distributed via an RSS feed - the name 'podcast' is at this point largely historical, and you certainly don't need an iPod - any computer will do. If you need a podcasting client (a 'podcatcher'), I recommend either the excellent bashpodder or RhythmBox for those who prefer to work in a GUI. And of course, if you have a Mac or Windows machine you can always make use of a client like iTunes.

If you have any other questions about getting yourself set up to listen to any of these (or other podcasts), or need any help setting up bashpodder so that it will run automatically as a cron job, please don't hesitate to get ahold of me! Additionally, I will go over a brief overview of both bashpodder and RhythmBox in my XWN740 lab for this week, so you can look there for more information about these clients.

Interesting thoughts...

I was listening to episode 39 of the netradio show FLOSS Weekly, in which Simon Phipps from Sun is interviewed by Randal Schwartz and Leo Laporte. Simon had some interesting thoughts when asked how his use of a Mac relates to his evangelism of open source software -- the bit that interested me went as follows:

SP: 'This is a pretty tough world if you decide to become an absolutist -'

LL: 'Yes, I guess so... Would you prefer a world where it was all open?'

SP: 'I absolutely would. I think to reach that world we have to be pragmatic with urgency. You know, I'm more than happy to use tools that work. What I'm not happy with is to use tools that work, but in such a way that I have no other options in the future. You know, the key thought for me is substitutability. I don't want to use any tool that couldn't be substituted for another tool in the future.'

Thursday, September 4, 2008

SYA710 Lab #00

SYA710 Lab #00:



#1. What is your full name?



My full name is Gregory John Masseau. I suppose you're no longer requesting student numbers in this question this semester?



#2. What is the output in steps 3 and 8?



'fdisk -l' on Ubuntu 8.04:

Disk /dev/sda: 10.2 GB, 10200547328 bytes
255 heads, 63 sectors/track, 1240 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0008edf2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1181     9486351   83  Linux
/dev/sda2            1182        1240      473917+  82  Linux swap / Solaris

Disk /dev/sdb: 62.0 GB, 62014404096 bytes
255 heads, 63 sectors/track, 7539 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x6ab1bd22

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         609     4891761   83  Linux
/dev/sdb2             610        1826     9775552+  83  Linux
/dev/sdb3            1827        3651    14659312+  83  Linux
/dev/sdb4            3652        7539    31230360   83  Linux

'fdisk -l' on Fedora 8:

Disk /dev/sda: 10.2 GB, 10200547328 bytes
255 heads, 63 sectors/track, 1240 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00058f46

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          51      409626   83  Linux
/dev/sda2              52         688     5116702+  83  Linux
/dev/sda3             689         752      514080   82  Linux swap / Solaris

Disk /dev/sdb: 62.0 GB, 62014404096 bytes
255 heads, 63 sectors/track, 7539 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x6ab1bd22

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         609     4891761   83  Linux
/dev/sdb2             610        1826     9775552+  83  Linux
/dev/sdb3            1827        3651    14659312+  83  Linux
/dev/sdb4            3652        7539    31230360   83  Linux

Disk /dev/dm-0: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000



#3. What is the purpose of the sudo command?



The sudo command allows you to execute a command as another user. If no user is specified (a particular user can be selected with the '-u username' switch), sudo will attempt to execute the command as the root user.



Depending on the configuration (generally configured using the '/etc/sudoers' file) a password (or other form of authentication) may be required, and the ability to masquerade as particular users or execute particular commands as a substitute user may be limited based on the user or group IDs of the user executing the sudo command.



#4. What is the purpose of the minus sign (-) when using the su command?



The dash signifies that su should simulate a full login by discarding the current environment and loading the substituted user's shell environment configuration in its place.
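In practice (the username is illustrative):

su - alice    # full login: alice's own environment, as if logged in at the console
su alice      # switch user, but keep most of the caller's environment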



#5. Explain how you deleted the partition table with fdisk.



I deleted the partition table using fdisk by way of cfdisk, a curses-based relative of the fdisk utility. Within the cfdisk program, I highlighted each partition in turn using the vertical arrow keys, selected the 'Delete' option with the horizontal arrow keys, and hit enter. Once all partitions were removed and the partition list displayed only a single entry whose FS Type field read 'Free Space', I selected the 'Write' option and hit enter to execute it. After exiting cfdisk, the 'partprobe' command was invoked to inform the kernel of the changes to the partition table.



#6. What is the purpose of the partprobe command?



The 'partprobe' command causes the kernel to re-read the partition table, picking up changes made since it was last read - generally at the last boot of the machine in question. This has surprisingly little effect on system behaviour, as once the Linux kernel has finished booting it makes little use of the partition table during normal operation.



#7. Write the complete mail command you would use to email a copy of lab00 to your LEARN account from MATRIX.



I would invoke the mail command using the following form:



'mail -s "ops335-lab00" john.selmys@senecac.on.ca < lab00.txt'



#8. What is the function of the -s option to the mail command?



The '-s' switch causes the mail command to use the argument following the switch as the subject line of the email being sent.