Analyzing The SparkFun USB Breadboard Power Supply Kit

As part of a batch order through Digikey, I recently picked up SparkFun’s USB Breadboard Power Supply kit. This is a small and convenient drop-in module that connects to a breadboard and provides 5V and 3.3V power sources through a USB 2.0 Type-B connector.

Though I had a great time building and using this kit, I also wanted to provide an analysis of the circuit, as I feel it could use some documentation for the novice kit builders it targets. While this is a fairly simple kit to work with, I had trouble finding step-by-step instructions, and I think it’s important for a kit of this sort to educate the builder while serving a practical purpose.

Here is the schematic provided by SparkFun:

Let’s start at the left side of the image with the USB input. The USB 2.0 standard specifies a power source of 5 volts at up to 500 milliamps. The 5 volt line is connected through an SPDT-style slide switch. The two data pins on the USB plug are left unconnected, as they serve no practical purpose in this circuit.

From here, we have two more components between the 5V input line and the LM317 input pin. R3 is a PTC-type resettable fuse – similar to a thermistor, it serves as a protection mechanism for the circuit. If the current drawn exceeds 500mA, R3’s resistance rises sharply and effectively cuts the input line until the overcurrent condition is removed. Capacitor C1 then serves as a filter, removing high-frequency noise or “ripple” from the input power source.

The circuit centers around the LM317 voltage regulator. This 3-pin IC can output up to 1.5A of current at voltages ranging from 1.25V all the way up to 37V. In our application, it’ll accept the 5V source from the USB connector and output 3.3V from its output pin.

The LM317 uses two resistors, labelled R1 and R2, to set the 3.3V output voltage. This voltage is determined by the formula V = 1.25V * (1 + R2/R1). Substituting the values of our resistors gives us 1.25 * (1 + 390/240), which equals 3.28125 volts – within 1% of our 3.3V target.
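
If you’d like to double-check the arithmetic (or experiment with other resistor values), a quick one-liner with bc does the job – the values below are simply the kit’s R1 and R2:

R1=240; R2=390
echo "scale=5; 1.25 * (1 + $R2/$R1)" | bc
# prints 3.28125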

The resulting 3.3V line is then connected to two capacitors. C2 and C3 are 10uF and 0.1uF capacitors, respectively, and are intended to provide ripple rejection and improve transient response.

Completing the circuit, we have two more branches – one leads to our power LED, marked as LED2 and protected by a 330 ohm resistor labelled R4. The other leads to JP3, which is our final 3.3V output.

I don’t have too much to say about the actual soldering process – it’s fairly straightforward and should be a reasonable challenge for a novice. I would mostly suggest being mindful of the orientation of the polarized components: the LM317 voltage regulator, the power LED, and the electrolytic capacitors. Two helpful guides for placing the components are the schematic and the markings on the PCB itself.

Beyond the more obvious applications of this circuit, we should now also have a good idea of how this type of design could be adapted. For instance, we could swap the USB input with a 5V DC power supply connector, or we could alter the output voltage by replacing our two adjustment resistors with a pair using different resistance values. I suggest taking a look at the LM317 datasheet provided by Texas Instruments, as it provides some additional sample circuits and describes how this power supply could easily be converted to a battery charger with the addition of a shutoff transistor.
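
As a rough sketch of that last point, the same formula can be rearranged to R2 = R1 * (Vout/1.25 - 1) to pick a new adjustment resistor. The 2.5V target below is just an example – whatever output you choose has to sit comfortably below the 5V input minus the LM317’s dropout voltage:

R1=240
VOUT=2.5
echo "scale=2; $R1 * ($VOUT/1.25 - 1)" | bc
# prints 240.00, so pairing the stock 240 ohm R1 with a 240 ohm R2 would give a 2.5V output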

To sum up, this is a great starter kit with practical value that can be had for under 20 dollars. In my case, I already have a computer with a USB port sitting on my bench, so I no longer need to use up an AC outlet for a wall-wart power supply. It’s great for anyone needing a quick 5V or 3.3V power source for their breadboard, and it’s an excellent way to learn about basic power supply design and the use of the LM317.

You can purchase the kit here – https://www.sparkfun.com/products/8376

On to RHCE

I recently scheduled my RHCE exam for February, meaning I’ll be spending the next month studying the course material and ensuring I have the necessary skills under my belt. I had originally intended to complete the test last July, but suffered from burnout following an extended period studying for RHCSA and was forced to re-schedule.

IT certs and self-paced learning can be tricky. They require a strong time commitment and can often be overwhelming in their breadth. The educational market is flooded with 1000-page study guides for 20-objective simulation-based exams, and it’s sometimes difficult to separate the essential portions from the “fluff” or extraneous topics. Further complicating matters, test objectives may not always reflect the most common tasks for a sysadmin or engineer, leading to frustration and resentment over failed exams.

While I’m generally interested in the topics I study, I always find myself fighting distraction. Studying is rewarding, but it’s easy to get pulled away by the instant gratification of unrelated reading, video games, and the like. Sometimes personal circumstances get in the way as well. This leads to under-preparedness, and if the student’s confidence isn’t there, the natural inclination is to re-schedule the exam rather than fall short and waste the money spent on registration.

Pacing is also important: over-studying can be detrimental, and the student should ideally prepare with a healthy sense of urgency and an eagerness to learn in a short period. If I set my test date too far in the future, I’m more likely to skip study days and feel less motivated by the time the test rolls around.

I feel that personal accountability is the main tool to combat these challenges. Announce your intent to write an exam on a certain date to your friends, family, and boss. If possible, join a study group, or find others writing a similar exam and offer peer support. Many computer-based training sites will offer some sort of scheduling to go along with their course material – take advantage of this functionality, make a schedule, and hold yourself to it.

Past experience has shown me that a little luck goes a long way as well.

As a study aid, I’ll be adding a few short, RHCE-relevant articles in the next while. Though there’s no shortage of information on the exam topics available online, I still hope that they will be helpful to prospective students and sysadmins.

SSH Tips: Key-Based Authentication, Remote Commands without Interactive Login, and SSHFS

SSH is an important and commonly-used tool for remote shell access to Linux and Unix systems. This post will cover some tips for effectively using SSH and the closely-related SSHFS to run commands and access files on a remote host.

We’ll work with a pair of client and server computers, each running a Red Hat- or Debian-based Linux distribution, and configure key-based authentication between the two. We’ll then use SSH to log in to the server from the client (without using a password!), look at how we can run commands on the server without interactively logging in, and finally use SSHFS to mount the server’s filesystem on our client computer. Most of the commands described will need to be run with superuser privileges, either by prefacing them with sudo or by using the su command to switch to a superuser account.

Key-Based Authentication

First, we’ll want to ensure the SSH daemon is running on our server with the systemctl status sshd command. If the sshd service is not running or enabled, we may need to start it on a one-off basis (systemctl start sshd), enable it so it starts automatically whenever the server boots (systemctl enable sshd), and/or install it if it’s missing entirely (yum install openssh-server on Red Hat-based distros or apt install openssh-server on Debian-based distros should do the trick).

Once we know the sshd service is running on the server, we can head back to our client to configure key-based authentication. This enables us to store a private encryption key on the client side, upload a corresponding public key to the server, then use these two keys to authenticate and communicate via SSH with the server.

The ssh-keygen command generates the private and public keys (using a random “seed” value), then prompts us to secure the key pair with a passphrase. The passphrase is optional, but provides an additional layer of protection.

user@CLIENTPC:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is: (fingerprint here)
The key's randomart image is:
(randomart image here)

Next, we use the ssh-copy-id command to upload the public key to the server. The key is appended to the ~/.ssh/authorized_keys file for the target user account on the server.

user@CLIENTPC:~$ ssh-copy-id user@SERVERPC
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/user/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
user@SERVERPC's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'user@SERVERPC'"
and check to make sure that only the key(s) you wanted were added.

Once this is complete, we can edit the /etc/ssh/sshd_config configuration file on the server to disable password logins entirely, leaving the key pair as the only means of authentication. This carries the advantage of preventing unauthorized individuals from logging into your account by guessing your credentials, as well as saving you the hassle of remembering an extra password.

The lines to change in the configuration file are as follows –

ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM no

Once these changes are saved, restart the sshd service using the systemctl restart sshd command to apply them. Future logins via SSH from your client machine to the corresponding account on the server will no longer require a password, as the client is now authenticated using the private/public key pair.

It is important to note that this change affects all users: key-based authentication will be required for everyone once these changes are made. If you lose your private key, you won’t be able to log in via SSH until the config file on the server is edited (from the console or another session) to re-enable password authentication.

Remote Commands without Interactive Login

While SSH enables interactive login to a host machine, this isn’t always necessary or preferable. We may only need to quickly run a single command on a server, in which case we can pass it as part of our SSH connection string –

user@CLIENTPC:~$ ssh user@SERVERPC echo "Hello World!"
Hello World!

Note that the output of the command is streamed back to the console of the client machine, and any redirection is handled by the client’s local shell. For this reason, if I had run ssh user@SERVERPC echo "Hello World!" > myfile, the "Hello World!" output would have been saved to a file called myfile on the client PC, not on the server.

At face value, this doesn’t seem to be much more than a marginal time-saver. However, this also enables us to script commands to be executed on a remote machine (or machines) in sequence. For example, I could write a script to query for free disk space on a series of servers and save the output to a single text file on my client machine.
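
As a rough sketch of that idea – the hostnames and username below are placeholders, and the key-based authentication configured earlier keeps the loop from prompting for passwords:

#!/bin/bash
# Collect free-disk-space reports from several servers into one local file.
SERVERS="web01 web02 db01"
OUTFILE=disk-report.txt

> "$OUTFILE"                              # start with an empty report
for host in $SERVERS; do
    echo "=== $host ===" >> "$OUTFILE"
    ssh "user@$host" df -h >> "$OUTFILE"  # df runs remotely, its output lands locally
done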

SSHFS

One of my favorite SSH-related tools is SSHFS, which enables us to mount a remote server’s file system locally through SSH without the need for any additional file-sharing services such as NFS or Samba.

As this tool is separate from SSH, it usually isn’t installed by default on most distros. This can be resolved with yum install fuse-sshfs on Red Hat-based systems or apt install sshfs on Debian systems.

Once SSHFS is installed on the client, we should first create a mount point (ex. mkdir /mnt/mymountdir).

We then mount the remote target to our client PC. In its simplest form using the client and server examples mentioned above, the command would be as follows –

sshfs root@SERVERPC:/ /mnt/mymountdir/

This particular example mounts the server’s entire filesystem on our client. However, we could also tailor the command to mount a narrower portion of the filesystem, such as a user’s home directory, if needed.
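
For example, keeping the same placeholder names from above, mounting only a user’s home directory would look like this:

sshfs user@SERVERPC:/home/user /mnt/mymountdir/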

Once we’re done, we can unmount the filesystem with umount /mnt/mymountdir/, as we would for any other mounted filesystem.

Though we’ll be prompted for a password in the example provided above, key-based authentication can be configured for SSHFS as well. Using this sort of authentication would also allow us to use SSHFS with the /etc/fstab file to automatically mount a filesystem at boot time, allowing for persistent file sharing over a local network or the Internet on any system supporting the SSH protocol.
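
As a sketch of what that /etc/fstab entry might look like – the remote path and key location are placeholders, and the mounting user (root here) needs its own key installed on the server:

# /etc/fstab
user@SERVERPC:/srv/share  /mnt/mymountdir  fuse.sshfs  _netdev,IdentityFile=/root/.ssh/id_rsa  0  0

The _netdev option tells the system to wait until networking is available before attempting the mount.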


Linux From Scratch: Fun, Rewarding, Occasionally Confusing. (Parts 1-2)

The authors of Linux from Scratch (LFS) recently celebrated the release of version 8.4 of their book on constructing a working Linux system entirely from source code. I’ve been meaning to dive into this project for a while and figured this would be a good chance to provide a first-hand report of my experiences.

Linux from Scratch is an excellent educational opportunity at a time when desktop Linux is continually becoming more user-friendly and abstracted through a never-ending stream of window managers and desktop environments disguising its internals. Gone are the days of Ncurses (or text mode) installers and arcane error messages; most mainstream distributions are as quick and simple as a Windows or OS X install. Though this is welcome in a practical sense, we lose the customizability and insight we once had in configuring a system from its component parts.

Conversely, LFS deals in the business of compiling, configuring, and text-file editing, stripping away the creature comforts of a desktop-oriented distro. Even the source packages themselves are downloaded manually using wget – we’re provided only with a package list and instructions to gather the files from the appropriate locations.

The familiarity the reader gains with the Linux command-line ecosystem as a result of this manual configuration process is one of LFS’s major strengths. Even veteran Linux desktop users will rarely interact with many of these components on a day-to-day basis. Non-developers don’t need to care about GCC, correct? What even IS binutils? No need to worry, this book will answer these questions in more-than-sufficient detail.

There’s also a large amount of insight to be gained on the structural challenges of the build process. The reader will learn of concepts such as circular dependencies (“to compile a compiler, you need a compiler”) and testing package compilation to ensure the preceding configuration was completed correctly. Additionally, the book discusses multiple “passes” or iterations of package builds as the associated toolchain and libraries change. Our first passes pull from the host system entirely, while later passes incorporate libraries and tools built to comprise part of the eventual LFS system. As a significant portion of the packages installed are themselves an essential part of the build process, we get to see how they function and rely on each other along the way.

LFS is also a great tool for differentiating the core of a Linux system from the GNU userspace utilities and the extra functionality layered on by the myriad desktop distributions. It makes it easier to understand the absolute essentials of a Linux system, as well as the niceties and redundancies that can be dispensed with depending on the user’s needs. Like the vast majority of distributions, LFS follows the POSIX standard (for system design), the FHS (for a unified filesystem hierarchy), and the LSB (for core Linux components – Core, Desktop, Runtime Languages, and Imaging).

As fair warning, it’ll quickly become apparent that this isn’t a good project for an absolute Linux beginner. Some working knowledge of basic file management commands (cd, ls, rm, mv, cp, and tar, for instance) will be required, as well as a bit of intuition for why errors can occur. The project is well-supported through mailing lists and IRC (irc.freenode.net #LFS-support), and I was able to get help easily for any issues I saw, but there were also a few minor problems that I was able to solve on my own. I’ve left most of these out of these articles, as they usually related to typos (my own – the book is impeccably edited), forgetting to set environment variables, or executing commands from the wrong directory for the task.

Additionally, due to the length of this process and the number of steps involved, it’s difficult to write about this project without it reading like a log. I’ll try to avoid this where possible and won’t be including much in the way of commands or output. Please don’t think of these articles as an instructional guide – the LFS book is the go-to resource, and its processes should be followed verbatim.

The book is issued in versions for both the sysvinit and systemd init systems, but I will be working with the latter due to its widespread adoption within most modern distributions. I should also note that there are PDF and HTML versions of the text. Having experience with both, I’d recommend the HTML route, as the commands can be a little more difficult to copy-and-paste from the PDF version.

I will be building x86_64 “pure” with no 32-bit support, though there are also instructions available for PowerPC and ARM. My host system (or VM, more accurately) is running Ubuntu Server 18.04.

Polishing Up The Host

LFS is cross-compiled from an existing Linux distribution. As the build process progresses, the new Linux system is gradually separated from its host environment until it reaches the point of being bootable on its own.

With this in mind, ensuring the host’s software packages are up to date is an important first step in building an LFS system. Fortunately, the authors include a quick version-checking script at the beginning of the book’s second chapter to ensure the host machine is up to speed.

I ran this script on my host and received the following results –

bash, version 4.4.19(1)-release
/bin/sh -> /bin/dash
ERROR: /bin/sh does not point to bash
Binutils: (GNU Binutils for Ubuntu) 2.30
bison (GNU Bison) 3.0.4
/usr/bin/yacc -> /usr/bin/bison.yacc
bzip2, Version 1.0.6, 6-Sept-2010.
Coreutils: 8.28
diff (GNU diffutils) 3.6
find (GNU findutils) 4.7.0-git
GNU Awk 4.1.4, API: 1.1 (GNU MPFR 4.0.1, GNU MP 6.1.2)
/usr/bin/awk -> /usr/bin/gawk
gcc (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
./version-check.sh: line 31: g++: command not found
(Ubuntu GLIBC 2.27-3ubuntu1) 2.27
grep (GNU grep) 3.1
gzip 1.6
Linux version 4.15.0-45-generic (buildd@lgw01-amd64-031) (gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)) #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019
m4 (GNU M4) 1.4.18
GNU Make 4.1
GNU patch 2.7.6
Perl version='5.26.1';
Python 3.6.7
sed (GNU sed) 4.4
tar (GNU tar) 1.29
./version-check.sh: line 43: makeinfo: command not found
xz (XZ Utils) 5.2.2
./version-check.sh: line 45: g++: command not found
g++ compilation failed

The fun begins! Most of our software is up to date version-wise, but a few important dependencies are missing.

Believe it or not, the mapping of /bin/sh to /bin/dash isn’t a typo – dash is a lightweight shell used in modern Debian/Ubuntu derivatives as a faster default /bin/sh. However, we’re required to fix the symbolic link to point to /bin/bash for the purpose of building LFS.

root@menikmati:~# rm /bin/sh
root@menikmati:~# ln -s /bin/bash /bin/sh

We’re also missing g++ and makeinfo. These dependencies were resolved with the apt install g++ and apt install texinfo commands, respectively.

A bare minimum install of LFS is around 6GB, though it’ll be helpful to have some extra space handy when building our system. Though the book goes into creating multiple partitions and mount points (root, /home, /boot, etc.), I settled for a single root partition of 15GB and a 2GB swap partition, as I don’t require the system for day-to-day use and wanted to keep the partition layout as simple as possible. I then formatted and mounted both partitions.
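
For reference, the formatting and mounting steps on my VM looked roughly like this – the device names are particular to my setup and will almost certainly differ on yours:

mkfs -v -t ext4 /dev/sdb1                  # the 15GB root partition for LFS
mkswap /dev/sdb2 && swapon -v /dev/sdb2    # the 2GB swap partition
mkdir -pv /mnt/LFS
mount -v -t ext4 /dev/sdb1 /mnt/LFS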

Next, the LFS environment variable is set. This variable points to the directory in which LFS will be built on the host system. In my case, I had previously mounted my new root partition at /mnt/LFS and set the environment variable to reflect this location.
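
In practice, that’s just a matter of exporting the variable in the shell – the path here matches my mount point:

export LFS=/mnt/LFS
echo $LFS    # confirm it's set before going any further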

With all that out of the way, it’s time to download packages! LFS recommends specific version numbers and provides a handy wget-list for the purpose. This took about 3 minutes on my test VM over Wi-Fi.
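
The book gives the exact commands for this step; from memory, it boils down to creating a sources directory under $LFS and pointing wget at the supplied list:

mkdir -v $LFS/sources
chmod -v a+wt $LFS/sources    # writable and sticky, per the book's convention
wget --input-file=wget-list --continue --directory-prefix=$LFS/sources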

Once the packages are downloaded, we finalize our preparation by creating a tools directory within our LFS directory and symlinking it to /tools. To prevent accidental changes to the host system, an lfs user is created and given ownership of the LFS directories. Environment variables are added to the lfs user’s .bashrc and .bash_profile to support the build process.
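
Heavily condensed, and going from memory of the book’s preparation chapter (the book’s versions carry a few extra options and explanations), that preparation amounts to something like:

mkdir -v $LFS/tools
ln -sv $LFS/tools /                               # makes /tools available on the host
groupadd lfs
useradd -s /bin/bash -g lfs -m -k /dev/null lfs
chown -v lfs $LFS/tools $LFS/sources
su - lfs                                          # continue the build as the unprivileged lfs user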

First Builds

As build time can vary drastically depending on the host system, LFS uses a unit called the SBU (Standard Build Unit) to express how long a package takes to compile and install. The first build of binutils (the first package to be installed) defines 1 SBU: the user times that build, and all other build times in the book are listed relative to it.
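
Measuring your own SBU is just a matter of wrapping the first binutils build in the shell’s time builtin. The configure line below is abbreviated – the book lists the full set of pass-1 options:

time { ../configure --prefix=/tools && make && make install; }
# the elapsed ("real") time for this run is your 1 SBU baseline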

The instructions also touch upon “test suites” – sets of checks that determine whether a package was built and installed correctly. These are largely skipped in this section, as the dependencies are not yet in place for most of the tests.

As mentioned in the introduction, we start by building a minimal system and toolchain to construct a final LFS system. The toolchain includes a compiler, assembler, linker, libraries, and utilities that are used to build the other tools. This toolchain is isolated from the eventual “final” system and stored in $LFS/tools.

Before we begin building packages, we need to identify our “target triplet”, the name that describes the working platform. We’ll also need the name of the platform’s dynamic linker/loader, not to be confused with the standard linker (ld) that is part of binutils. For my 64-bit system, the dynamic linker should be ld-linux-x86-64.so.2, though you can check with readelf -l <name of binary> | grep interpreter.
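
Two quick host-side checks illustrate both values (these aren’t the book’s exact method, just a convenient way to peek – /bin/ls is an arbitrary binary to inspect):

gcc -dumpmachine                         # prints the compiler's target triplet
readelf -l /bin/ls | grep interpreter    # shows which dynamic linker the binary requests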

Our first packages are binutils (a cross-linker) and GCC (a cross-compiler). Temporary libraries are cross-compiled, and the GCC sources are adjusted to tell the compiler which target dynamic linker will be used.

It was at this point that I hit my first challenge of the project, as GCC failed to build properly due to a missed option in the pre-build configuration process. After racking my brain with no success, I reached out to the #lfs-support channel on Freenode and promptly got the help I needed.

Next, the sanitized Linux API headers are installed, allowing Glibc to interface with the features the kernel will provide.

Glibc is then installed, referencing the compiler, binary tools, and kernel headers already present.

This is followed by a second pass of binutils, referencing the proper library search path to be used by ld. I hit another snag at this point, as I had failed to symlink /tools/lib64 to /tools/lib. In this case, I was able to identify and resolve the issue by searching the linuxquestions.org message board.

GCC then sees a second pass with its sources modified to reference the newly-installed dynamic linker rather than the one present on the host system. Once this is complete, the core toolchain is self-contained and self-hosted and is no longer reliant on the host.

From here, we build a series of utilities – Tcl, Expect, DejaGNU, Ncurses, Bash, Bison, Bzip2, Coreutils, Diffutils, File, Findutils, Gawk, Gettext, Grep, Gzip, Make, Patch, Perl, Python, Sed, Tar, and Xz. These all installed smoothly without much in the way of additional configuration.

Once the packages are installed, we then strip unneeded debugging symbols, documentation, and temporary build directories to reclaim disk space.

Finally, ownership permissions for the $LFS/tools directory are assigned to the root user, as our work with the lfs user is complete. My next article will cover Part III of LFS, in which we chroot into the new system and use our toolchain to continue building the LFS system.

Presentation to KWLUG on i3 and Tiling Window Managers – January 7, 2019

I recently made a presentation to the Kitchener-Waterloo Linux User Group on the topic of i3 and tiling window managers. The folks involved were kind enough to record the audio feed and have made it freely available through archive.org.

If you’re interested, you can check out my presentation (as well as audio feeds for nearly 60 past meetings) at the following link.

https://archive.org/details/kwlug-2019-01-07-wm-wsl

My Experience With LPIC-1 Certification

I recently completed the LPIC-1 certification offered by the Linux Professional Institute, which tests candidates on Linux internals and system administration.

LPIC-1 certification is broken down into two exams: 101-400 and 102-400. 101-400 covers topics such as Linux system architecture, installation, package management, devices, and filesystems. Conversely, 102-400 explores shell scripting, X11/GDM, service management, basic network configuration, and security concepts such as user and group permissions. The objectives emphasize the location of crucial system and configuration files, while also delving heavily into command line utilities and their respective options and switches. There’s also a focus on the use of vi as a text editor, as well as some rudimentary exploration of SQL using MariaDB, which I thought was a good general-purpose addition for any aspiring sysadmin.

As the certification is vendor-agnostic, the course objectives cover both Red Hat and Debian derivatives, including their respective package managers and distribution-specific utilities. At times, this became a bit overwhelming, but I understood the need for a prospective Linux sysadmin to work with both alternatives due to their ubiquity and market share. A little more puzzling was an equal focus on both System V and systemd init systems, which feels less essential in the present day. Despite its detractors, systemd has taken hold as a standard in the Linux community, and I wouldn’t be shocked to see System V init abandoned entirely in future versions of the certification.

I was disappointed when I discovered that the exam questions are all multiple-choice. While rote knowledge of the commands and concepts is impressive, the lack of simulation-based content may turn off prospective employers that seek a more practical test of a candidate’s technical knowledge. Many certs have been devalued by cheating and freely available online “brain-dumps”, and I doubt these exams are any exception to the rule.

Speaking as a casual Linux user since the mid-1990s, I was shocked by how unfamiliar the content felt. There’s a particular emphasis on system administration and management that the average user will rarely, if ever, touch. If anything, I think this speaks to the ease of use of most modern Linux distros; for instance, most home users don’t have to consider their hard drive’s partition layout during installation, nor do they have to toil on the command line to configure their system when graphical desktop environments such as GNOME lay the options out in a user-friendly manner and provide all the necessary buttons and sliders.

My study materials were a combination of the LPIC-1 video course available at www.linuxacademy.com, the course objectives from the LPI website, and a selection of “how-tos” gleaned from various websites. It’s important to stress the use of multiple information sources in combination – given the breadth and depth of the exam objectives, I don’t think any one source would have helped me pass the exams on its own.

Drawbacks aside, I feel the certification is still worth the time and money (~$400 US, with various vouchers and discounts available to offset the cost). The knowledge I gained as a relative novice was a good return-on-investment and would serve as a good stepping stone to a more intensive and practical certification such as those offered by Red Hat. As such, I’d recommend LPIC-1 to anyone seeking a certificate reflecting a vendor-agnostic approach to Linux system administration.

Folding@Home and Distributed Computing

Folding@Home is an open-source distributed computing project launched by Stanford University professor Vijay Pande in October 2000. It aids in disease research by simulating the myriad ways in which proteins “fold” or assemble themselves to perform some basic function. Though protein folding is an essential biological process, mis-folding can lead to diseases such as Parkinson’s, Huntington’s, and Alzheimer’s. Consequently, the examination of folding models can help scientists understand how these diseases develop and assist in designing drugs to combat their effects. As of March 2018, 160 peer-reviewed papers have been published based on results obtained from Folding@Home simulations.

Distributed computing describes the method of a larger task being broken down into portions and shared across multiple computers. In the context of Folding@Home, client PCs download a “work unit” from the project’s work servers, perform the computational work needed to model the protein’s folding, then re-upload the results to a server when complete. The workload behind this folding is significant, and there may be a large number of work units involved in one specific model.

The Folding@Home client software is installed on a user’s PC and is commonly configured to sit idle until the PC is left unattended for a period of several minutes, similar to how one would use a desktop screensaver. The software then consumes idle CPU and GPU resources to perform its task (this is also configurable, as pushing CPU and GPU usage also increases energy consumption and results in excess heat generation). Several computing platforms are supported; beyond the commonly available Windows, OS X, and Linux clients, versions of the software have also been developed for the Sony PlayStation 3 and Android mobile devices.

Though distributed computing models are extremely common in a business/scientific context (weather modelling, graphic rendering, and cryptocurrency mining all share a similar approach), most of these examples rely on a centralized cluster of computers owned by a single company or research organization, often benefitting the financial interests of that single organization rather than accomplishing a public good.

In contrast, Folding@Home is part of a smaller subset of “volunteer computing” projects intended to reach its goal through harnessing the computational resources of hobbyists. SETI@Home is arguably the most well-known of these projects and is devoted to the search for extraterrestrial life based on the analysis of radio signals. Enigma@Home has assisted in decoding previously-unbroken messages encrypted by the German Enigma machines during the Second World War. The European research organization CERN also threw its own hat in the ring by offloading portions of its research around the Large Hadron Collider to volunteer computing enthusiasts.

Such projects are not always benevolent: distributed and volunteer computing can also be used as a destructive force. Botnets and distributed denial-of-service (DDoS) attacks are two common examples of this phenomenon. The LOIC (Low Orbit Ion Cannon) is a notable DDoS application used by the Anonymous group over the past decade to deny public access to websites they deemed objectionable.

I’ve personally been lending my resources to Folding@Home since the summer of 2010 and recently reached the top 1% of nearly 2 million contributors. Beyond the obvious philanthropic qualities of the project, it’s also taught me a great deal about a variety of computer science concepts. Hardware selection, performance tuning, and performance monitoring all play an important part in optimizing a folding cluster, making Folding@Home a great starting point for aspiring home labbers and sysadmins.

You can find out more about Folding@Home here – http://folding.stanford.edu/


Design Flaw

The Dell Precision M4700 laptop, while generally dependable, has a woeful quirk in its construction that’s caused a good share of frustration for yours truly.

The Mini-SD card slot is located on the left side of the laptop, situated just millimetres above the slot-loading optical combo drive.

Many a time, I’ve grabbed a card off my desk, blindly reached around the side of the chassis, and inadvertently popped the card right into that slot-loading drive.

Crap. Reach for a paperclip, or a credit card or a key or something like that. Hope that it doesn’t get hauled into the drive as well, while we’re at it.

It’s probably happened a half-dozen times in the last year.

I mean, this wouldn’t be a problem if I didn’t keep making the same mistake over and over.

What’s that whole saying about a chain only being as strong as its weakest link?

Documentary Review – Viva Amiga: The Story of a Beautiful Machine (2017)

Viva Amiga: The Story of a Beautiful Machine is a 2017 documentary by director/producer Zach Weddington detailing the history of Amiga Inc. and their eponymous line of home computers. The company’s trajectory is charted from its beginnings in the early 1980s, through its acquisition by Commodore International and launch of the Amiga 1000 in 1985, into the platform’s demise in the mid-1990s. Weddington crowd-funded the project from a 2011 Kickstarter campaign, culling interviews from former Amiga and Commodore employees including engineers Bil Herd and Dave Haynie, software developer Andy Finkel, and Amiga 500 hardware mastermind Jeff Porter.

Considerable care is taken to properly frame the Amiga’s story within the context of the 1980s home computer market, which was substantially more heterogeneous in terms of brands and hardware than its modern counterpart. Though the Amiga was best positioned to compete with the features and target audience of the Apple Macintosh, the home-computing scene was also saturated with offerings from big-business monolith IBM, Atari (headed up by Commodore ex-pat Jack Tramiel), and Tandy/Radio Shack. The Amiga unfortunately also competed with Commodore’s own C128, which the company had marketed in parallel as a more cost-effective alternative, undercutting the Amiga’s adoption.

A large portion of Viva Amiga charts the development of the Amiga 1000 under the direction of company founder Jay Miner, who demonstrated his faith in the project by taking out a second mortgage to help finance its production. Footage of the 1985 launch event is included and does a superb job of illustrating the excitement and novelty surrounding the A1000 at the time. Other elements of the launch reflect current tech industry tropes: guest appearances (insert Andy Warhol here), shaky software demos, and the promise of cutting-edge products with little or no actual stock available.

Pause for personal reflection: I first laid hands on a hand-me-down Amiga 500 in 1994, accompanied by the requisite stack of cracked-and-duplicated floppy disks. Most of my hours were spent playing games like Wings and Life and Death, but I also remember being intrigued by the skeuomorphic approach of the Amiga Workbench. Having never seen a Mac or Atari ST, I was shocked by the intuitiveness of this GUI and its relation to a real-world working environment, especially when taking the software’s age into account. Next to the Amiga 500 sat an IBM PC clone running DOS 6.22 and Windows 3.1. Windows had desktop icons and program groups; Workbench had file drawers, folders, and a recycle bin.

With this point in mind, the documentary takes a compelling turn when examining the Amiga’s role in digital content creation, broaching the subject of computing for its own sake versus computing as a means to an end. With the Amiga’s simple, effective user interface and increased graphics and sound capabilities relative to those of its predecessors, an argument is made for the platform as a pioneering media production tool, eliminating a layer of abstraction between the operator and computer and allowing users to seamlessly explore traditionally non-digital creative fields such as animation, music composition, and visual art (the latter of these no doubt spurred by the release of a non-copy-protected version of Deluxe Paint). The massive popularity of NewTek’s Video Toaster is treated with the appropriate level of gravitas, while the documentary’s commentators also point to the platform’s foothold in the CGI and 3D modeling fields.

The Amiga’s eventual downfall is almost universally attributed to Commodore’s miserable approach to marketing and odd placement in the retail sphere, issues which were compounded following the ouster of COO Thomas Rattigan in 1987. Commodore’s fate is sealed by misguided projects such as the Amiga CD32 and CDTV, segueing into a two-pronged epilogue; the failure of larger commercial ventures based around the Amiga’s intellectual property, as well as an overview of the homebrew “Amiga-in-name” hardware and software released by enthusiasts throughout the rest of the 1990s into the present day.

Viva Amiga best serves as an entry point to the Amiga oeuvre for budding retrocomputing enthusiasts or tech historians, while offering a hefty dose of nostalgia and fuzzy feelings to veterans and die-hards. Given the esoteric nature of the subject matter, some familiarity with the platform is assumed, but the narrative never becomes too technical. A deep selection of file footage highlights the major events, the interviews are well-edited and relevant, and the interviewees are dynamic and engaging (particularly Workbench architect RJ Mical, who propels Viva Amiga‘s watch-ability up a few notches on sheer enthusiasm alone).

Viva Amiga: The Story of a Beautiful Machine (official website)