Retro Programming With “Fantasy Consoles”

You may be wondering what I’ve been doing this year since there have been no posts. Readers, it’s because I’ve found something wonderful. I’ve found fantasy consoles. What are they, you may ask? I’ll tell you. Basically, a fantasy console is a complete game development environment, a bit like a retro emulator – except that, crucially, it replicates the programming constraints of hardware that never actually existed. Typically, a fantasy console also bundles everything you need to create games into one package: a text editor to write your code, a sprite and map editor to create your graphics and maps/levels, a sound effects editor and a music editor.
Importantly, because these environments mimic the restrictions of old hardware, there are limits to what you can do. Each fantasy console product (which we’ll get to in a minute) has a standard set of artificially imposed development constraints, such as restricted graphical resolution, colour palette, RAM usage or storage space. Often, sprites can only be a certain size with certain colours, you can only have so many lines of code before the memory is “full”, things like that.

And it’s fantastic.

Part of the allure of these things is that, due to the limitations each system imposes, your games can only be so big or complex. This not only forces you to think about what you really want to do, it also means that projects become completable… there’s literally no feature creep because there’s no room in the tiny runtime for it. It also means that, like the good old days of the C64 and ZX Spectrum, games developed with each system tend to end up sharing the same aesthetic. Fantasy consoles seek to recreate the aesthetics and community of 1980s home computing for a modern audience.

So if you want to recreate the simple games of your youth, just enjoy the chunky pixel aesthetic, or are simply short on time or knowledge and want to create your own little video games, here is a list of the most popular fantasy console environments available at the time of writing.

PICO-8
The PICO-8, first released in 2014 by Voxatron developer Joseph “zep” White, is for “making, sharing and playing tiny games and other computer programs.” It was the first “fantasy console” and is arguably the most famous. It comes with a (cut-down) Lua interpreter for the code, a sprite and map editor, a sound effects editor and a music editor. Ingeniously, the programs you create can be stored in a PNG image that looks like an old video game cartridge, with your game’s screenshot as the label, and PICO-8 will happily load it. They can also be saved and distributed as *.p8 files. There is also an export feature for a Phaser-based HTML5 application so anyone can play what you’ve created, and native binary export is planned for the next release. PICO-8 has a token limit (the amount of code you can write) that is quite harsh, but it keeps your ideas small-scale and it’s fun to see how much stuff you can squeeze into each game.

Key Features

Cost: $14.99 USD.
Proprietary?: Yes.
Video Resolution: 128×128.
Palette: Fixed, 16-colour.
Language: Lua.
Export formats: HTML5/JS (browser), *.p8/*.p8.png cartridges (PICO-8 files). Native standalone binaries planned for 1.0 release (Win/Mac/Linux).
Built-in features: Code editor, Sprite Editor, Tilemap Editor, SFX Editor, Music Editor, Screenshot tool, GIF Recorder, and SPLORE (internet connected cartridge browser connected to the PICO-8 BBS where you can upload your creations)
Platforms: Windows, Mac, Linux, Raspberry Pi, PocketC.H.I.P.

TIC-80
The TIC-80 is a worthy rival to the PICO-8 – with a wider, higher-resolution display, a customizable 16-colour palette, a 64KB code-size limit and up to 256 8×8 foreground sprites and 256 8×8 background tiles. The TIC-80 is powerful enough that I’d describe it as a fantasy computer to PICO-8’s fantasy console. It also works on Android! This would be my personal second choice after PICO-8.

Key Features

Cost: Pay what you want.
Proprietary?: Yes.
Video Resolution: 240×136.
Palette: Customizable during development but fixed at runtime. 16-colour.
Language: Lua/Moonscript.
Export formats: HTML/JS (browser), *.tic cartridges (TIC-80 files). Native binaries (Win/Mac/Linux).
Built-in features: Code editor, Sprite Editor, Tilemap Editor, SFX Editor.
Platforms: HTML5/JS (browser), Windows (UWP), Mac, Linux, Android.

LIKO-12
LIKO-12 is built on top of the rather awesome LÖVE2D game engine, which itself runs on Lua. It’s basically an open-source take inspired by PICO-8 but without some of the limitations: it comes with a wider display, no token limit, more graphics memory and a different API… so while some code may look superficially similar to PICO-8’s, it’s different. But since it’s open source, it’s free, baby!

Key Features

Cost: Free.
Proprietary?: No, Open-Source.
Video Resolution: 192×128.
Palette: Fixed at runtime, 16-colour.
Language: Lua.
Export formats: *.lk12 cartridges (LIKO-12 files).
Built-in features: Code editor, Sprite Editor, Tilemap Editor, GIF Recorder.
Platforms: Windows, Linux, Mac, Android, iOS & Raspberry Pi through LÖVE2D.

PixelVision8

Pixel Vision 8 attempts to be all things, really. Rather than defining itself as a fantasy console with specific, fixed limitations, Pixel Vision 8 lets developers define the limitations they want to work within.
It currently offers a choice of four templates, each based on an existing 8-bit console such as the Sega Master System and NES. Users aren’t restricted to these templates, though – they can change and expand the limitations as needed using the built-in development tools, which is a neat idea. Pixel Vision 8 is still in early development, but it’s definitely one to watch.

Cost: $10.00 USD during early access. Will have free and pro versions after beta release.
Proprietary?: Open-Source API, proprietary official tools.
Video Resolution: Various, depending on settings & templates used.
Palette: Customizable during development, fixed at runtime .
Language: Lua.
Export formats: *.pv* files (Pixel Vision 8 files), other formats coming soon.
Built-in features: System Templates (NES/Famicom, Sega Master System, Game Boy, Sega Game Gear), Graphical File Browser, Display Configuration Tool, Sprite Editor, Tilemap Editor, SFX Editor, Music Editor.
Platforms: Windows, Mac, Linux.

So get crackin’. Choose your comfy development environment and start cranking out those games. You can play my first effort here.

I’ll get back to Linux stuff soon :-D I’m hoping the next version of PICO-8 has native binary support so I can create Linux versions for everybody.


Speeding Up Performance on a RetroPie Setup

I was desperately trying to get hold of a NES Classic Mini console before Christmas, but because Nintendo are hopeless they were out of stock everywhere except from third-party sellers on Amazon and eBay charging a 1000% markup. I kinda wanted one because it would look neat next to my TV, and I could introduce the “good old days” to my little boy. But no, they were sold out everywhere, naturally.
Then I thought, “what the hell am I doing?”, bought a Raspberry Pi 3 from Amazon and downloaded RetroPie to an SD card. RetroPie is the regular Raspbian/PIXEL Linux OS for the Raspberry Pi with a huge number of retro emulators built in, all wrapped in a nice GUI. All you have to do is set up your USB controller mappings and point it at some game ROM files and presto… your very own retro games console.
The benefits were obvious to me – you can play nearly every game for any system prior to 1995, including the NES, arcade machines, Sega Megadrive (Genesis in the colonies), Master System, PC Engine, Neo Geo, SNES etc., you can choose your own controllers (I went for PS3-alike controllers) and it’s very customisable with menus and themes and such. You can even (with a little programming magic) add things like background music to the menus.
There are *plenty* of guides on how to set up RetroPie on a Raspberry Pi, like here, here and here, so I won’t repeat the information. Suffice to say, it’s pretty easy once you’ve imaged the SD card properly.
The problem I was having was that, even on a Raspberry Pi 3, some of the emulators (like the SNES one, for some reason) would be very slow, with lagging sound and sinking frame rates. Some emulators, of course, like the Sega Dreamcast and N64 ones, won’t run well on even a Pi 3 – they simply require too much power for any Pi – but the SNES one should run fine, which suggested the slowdown was a configuration issue rather than raw horsepower.
I investigated overclocking as a potential solution, but the Pi 3 doesn’t officially support overclocking yet, although it can be done (use a heatsink!) – the menu option that used to be there for the Pi 1 & 2 is missing on the Pi 3. So I tried various things in the /opt/retropie/configs/all/retroarch.cfg config file, like various fixes for sound issues and a few other tips and tricks to speed up emulation, with varying degrees of success depending on the emulator.
The problem ended up being the “scaling governor” for the Pi’s CPU, which was somehow stuck in powersave mode, throttling the cores. The fix is to set each CPU core’s scaling governor to “ondemand” rather than “powersave”. You can do this by adding the following lines to your /etc/rc.local file, which (if you know your Linux) is run at startup.

echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo ondemand > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo ondemand > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo ondemand > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
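The four lines above can also be generated for however many cores a board reports, rather than hard-coded; a small sketch (set_governor_lines is a name I made up, and writing to /sys still needs root):

```shell
# Emit one governor line per core (cpu0..cpuN-1), suitable for /etc/rc.local.
# set_governor_lines is a made-up helper name, not part of RetroPie.
set_governor_lines() {
    local n=$1 i
    for i in $(seq 0 $((n - 1))); do
        printf 'echo ondemand > /sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor\n' "$i"
    done
}

set_governor_lines 4
```

Append the output to /etc/rc.local (as root) and the governors get set on every boot, whether the board has two cores or eight.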

This fixed all my Raspberry Pi 3 emulation performance issues, at least with the emulators that should run well on a Pi 3. I hope you found this useful. If you are still experiencing performance issues, try some of the other sound and configuration fixes mentioned above. Overclocking is an option but should probably be a last resort.


The Node is now HTTPS only


How I Automated My ExpressVPN Connection Under Ubuntu

I’ve just signed up for a VPN service. I decided to go with ExpressVPN as they have clients for pretty much every device/OS out there and also support OpenVPN. They’re not the cheapest by any means, but their Twitter feed shows they’re passionate about digital privacy, so up I signed.

Unlike the Windows client, the Linux client is a command line tool, which is actually great for Linux folks because you can then script it. Once you’ve activated your client on the machine you want to use it on, the usage is very simple: –

expressvpn connect smart         <-- Connects to the "smart" location which offers best speed for your country location. 
expressvpn disconnect            <-- Disconnects the VPN, obviously :-)
expressvpn refresh               <-- Refreshes the list of VPN hosts in various countries
expressvpn list                  <-- List current hosts in various countries
expressvpn connect {alias}       <-- Connection to a specific country from the list
expressvpn status                <-- Shows if VPN connected and if so, which country.

If you run “expressvpn list” you’ll get a list similar to this: –

ALIAS	COUNTRY				LOCATION			RECOMMENDED
-----	---------------			------------------------------	-----------
smart	Smart Location			UK - London			Y
smi					USA - Miami			
usta1					USA - Tampa - 1			
uslv					USA - Las Vegas			
usla3					USA - Los Angeles - 3		
frpa1	France (FR)			France - Paris - 1		Y
frpa2					France - Paris - 2		
defr1	Germany (DE)			Germany - Frankfurt - 1		Y
defr2					Germany - Frankfurt - 2		
dedu					Germany - Dusseldorf		
deda					Germany - Darmstadt		
ch1	Switzerland (CH)		Switzerland			Y
itmi	Italy (IT)			Italy - Milan			Y
itco					Italy - Cosenza			
es1	Spain (ES)			Spain				Y
camo2	Canada (CA)			Canada - Montreal - 2		
cato					Canada - Toronto		
cato2					Canada - Toronto - 2		
cava					Canada - Vancouver		
se1	Sweden (SE)			Sweden				
ie1	Ireland (IE)			Ireland				
is1	Iceland (IS)			Iceland				
no1	Norway (NO)			Norway				
dk1	Denmark (DK)			Denmark				
be1	Belgium (BE)			Belgium				
fi1	Finland (FI)			Finland			

When you list the locations you can connect to, some items have a “Y” next to them, meaning those countries/locations are “recommended” for best speed. They differ from the “smart” option in that the smart connection will always be in the country you reside in, just with a different IP address.
I thought it would be good if, when I log in to my Ubuntu machine, it connected to a randomly selected VPN location marked with the “recommended” “Y”. Since I use Cinnamon under Ubuntu rather than Unity (much better, IMHO), I’d also like to display the country I’m connected to as an applet in my top desktop panel.

Here is the bash script which does this on boot.

#!/bin/bash
expressvpn refresh
expressvpn list | tail -n+3 | column -ts $'\t' | sed 's/   */:/g' > data.tmp
cat data.tmp | while read LINE
do
        if [ `echo $LINE | tail -c 2` == 'Y' ]; then
                TEMP=`echo $LINE | grep : | awk -F: '{print $1}'`
                echo "$TEMP" >> array.tmp
        fi
done
ARRAY_SIZE=`wc -l array.tmp | awk {'print $1'}`
RND_VPN=`shuf -i 1-$ARRAY_SIZE -n 1`
VPN=`sed "${RND_VPN}q;d" array.tmp`
rm -f data.tmp
rm -f array.tmp
expressvpn connect $VPN

It’s pretty easy to understand. First, I refresh the ExpressVPN country/location list. Then I write each location into a colon-separated file called data.tmp. Next, I iterate through each line in that file and append every country which has a “recommended” “Y” next to it to a file called array.tmp. Then I pick a random line number from that list with the shuf command (RND_VPN) and extract the alias code on that line into a variable called VPN. This alias is what the ExpressVPN client uses in conjunction with the connect parameter.
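For what it’s worth, the select-a-recommended-alias logic can be collapsed into a single awk pass; here’s a sketch, with a few hard-coded sample rows standing in for the real `expressvpn list` output (I’m assuming it stays tab-separated with the Y flag in the last column):

```shell
# Sample rows mimicking `expressvpn list`: two header lines, then
# tab-separated columns with "Y" in the last field for recommended hosts.
sample=$(printf 'ALIAS\tCOUNTRY\tLOCATION\tRECOMMENDED\n-----\t-------\t--------\t-----------\nsmart\tSmart Location\tUK - London\tY\nsmi\t\tUSA - Miami\t\nfrpa1\tFrance (FR)\tFrance - Paris - 1\tY\n')

# Skip the headers, keep rows whose last field is "Y", print the alias,
# then pick one at random.
VPN=$(printf '%s\n' "$sample" | awk -F'\t' 'NR > 2 && $NF == "Y" {print $1}' | shuf -n 1)
echo "$VPN"
```

On the real machine you’d pipe `expressvpn list` straight into the awk instead of the sample variable, then hand $VPN to `expressvpn connect`.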

The last bit is showing this information in Cinnamon as an applet. I don’t know much about writing Cinnamon applets beyond the fact that they’re written in JavaScript, as the Cinnamon documentation for this is super-weak, but I found a simple network status applet which is basically a refreshing label, and that will do. All this does is put the output of the “expressvpn status” command into a label and display it. Simple applets are composed of two files: the JavaScript file and a JSON descriptor file. Here are my horribly hacked-together versions.

applet.js

const Applet = imports.ui.applet;
const Lang = imports.lang;
const Mainloop = imports.mainloop;
const GLib = imports.gi.GLib;
const UUID = "expressvpn@jon";
const REFRESH_INTERVAL = 10

function log(message) {
    global.log(UUID + '#' + log.caller.name + ': ' + message);
}

function logError(error) {
    global.logError(UUID + '#' + logError.caller.name + ': ' + error);
}

function MyApplet(orientation) {
    this._init(orientation);
}

MyApplet.prototype = {
    __proto__: Applet.TextIconApplet.prototype,
    _init: function (orientation) {
        Applet.TextIconApplet.prototype._init.call(this, orientation);
        try {
            this.set_applet_icon_name("emblem-web");
            this.set_applet_tooltip(_("ExpressVPN Node location."));
            this.set_applet_label("...");
        }
        catch (error) {
            logError(error);
        }
        this.refreshLocation();
    },

    refreshLocation: function refreshLocation() {
        let [res, out] = GLib.spawn_command_line_sync("expressvpn status");
        out = String(out).replace(/\n/g, "").trim();
        //out = out.replace(/ /g, "-");
        //out = out.replace(/   /g, " ");
        this.set_applet_label('VPN: ' + out);
        Mainloop.timeout_add_seconds(REFRESH_INTERVAL, Lang.bind(this, function refreshTimeout() {
            this.refreshLocation();
        }));
    }
};

function main(metadata, orientation) {
    let myapplet = new MyApplet(orientation);
    return myapplet;
}

metadata.json

{
    "dangerous": true,
    "description": "Shows ExpressVPN Location",
    "name": "ExpressVPNStatus",
    "uuid": "expressvpn@jon"
}

Both these files go in the directory “$HOME/.local/share/cinnamon/applets/expressvpn@jon” as the directory is “applet_name@author” format. You can then add the applet as normal in Cinnamon to a panel.
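The install itself is just dropping the two files into that directory; a quick sketch (the cp is commented out so you can run it from anywhere and uncomment it in the directory holding your files):

```shell
# Create the directory Cinnamon expects (applet_name@author format) and
# copy the two applet files in.
APPLET_DIR="$HOME/.local/share/cinnamon/applets/expressvpn@jon"
mkdir -p "$APPLET_DIR"
# cp applet.js metadata.json "$APPLET_DIR/"
echo "applet directory ready: $APPLET_DIR"
```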

I suspect something similar could be done with other VPN clients if you wanted :-)


Create an Encrypted Disk Partition or File Container With Ubuntu

It can sometimes be useful to have an encrypted partition to keep your stuff secret from prying eyes. Most tutorials will tell you to encrypt your home partition and have it opened at boot time, but that means the partition is effectively unencrypted for as long as your machine is switched on and you are logged in.
Here, I’ll show you how to set up a free partition or disk as an “on demand” encrypted partition: we’ll write a couple of shell scripts to open and close the protected partition so that it is only open when you need it. This also has the added benefit of being mostly invisible to a cursory glance at the system – unless somebody thinks to check the contents of /dev, your encrypted partition won’t be visible because it’s not always mounted.

I will explain how to encrypt your partition using the Linux Unified Key Setup (LUKS) on Ubuntu Linux. Ubuntu encrypts the volume with 256-bit AES, so it’s pretty secure. LUKS is the standard for disk encryption on Linux and is available on any system running a 2.6+ kernel… which these days is pretty much every system.

For this, you will need either a second hard disk you want to use as an encrypted drive – meaning the partition takes up the whole disk – or an empty partition on your primary disk, assuming you didn’t use the entire disk when you set up your Ubuntu system. If you don’t have a spare disk or partition available, don’t worry – LUKS is flexible enough that you can use a file container as the encrypted storage area instead. If you’re using a partition, I’ll assume it’s called /dev/sdb1 (first primary partition on the second hard disk). Yours can be anything, obviously.

First you will need cryptsetup, a utility for setting up encrypted file-systems using Device Mapper and the dm-crypt kernel module. You can install it with: –

sudo apt-get install cryptsetup

Once this is installed, you can use it to set up the initial partition. Remember, we’re using /dev/sdb1 (first partition on the second disk) but you can use anything.

sudo cryptsetup -y -v luksFormat /dev/sdb1

This will warn you that this will wipe the data on the partition, so make sure you’ve passed it the right one :-) Here is also where you’ll set the password/passphrase for the partition, so don’t forget it!

WARNING!
========
This will overwrite data on /dev/sdb1 irrevocably.
 
Are you sure? (Type uppercase yes): YES
Enter LUKS passphrase:
Verify passphrase:
Command successful.

Okay, now the partition needs to be opened by the encryption manager. Do this with: –

sudo cryptsetup luksOpen /dev/sdb1 encrypted

You will be prompted for the password you created when you formatted the encrypted partition. The “encrypted” string here is arbitrary – it’s just used as a label for the device mapper, so you can call it anything; I’ve called mine “encrypted” for clarity. You’ll see this if you issue an “ls /dev/mapper” command: the new encrypted partition is listed as “/dev/mapper/encrypted”.

Next, we’re going to make sure the partition is securely clean by using the dd command to zero out all the bits (this can take a long time on a big partition).

sudo dd if=/dev/zero of=/dev/mapper/encrypted bs=128M

The partition is now available but it still needs to be formatted with the normal file-system tools in order to be usable. You can format it in the usual manner.

sudo mkfs.ext4 /dev/mapper/encrypted

Now you can start using it! Here is the shell script to open the partition. I’m mounting my encrypted partition on $HOME/encrypted but you can change that part of the script to anything you like.

encrypted-open.sh

#!/bin/bash
sudo cryptsetup luksOpen /dev/sdb1 encrypted
# Comment out the above and uncomment the below if
# you want to use a file container instead of a partition
# See further down for file containers example.
# sudo cryptsetup luksOpen $HOME/encrypted-container encrypted
sudo mount /dev/mapper/encrypted $HOME/encrypted
df -h | grep encrypted
mount | grep encrypted
echo "ENCRYPTED PARTITION ACTIVE ON $HOME/encrypted"

and the shell script to close the partition again…

encrypted-close.sh

#!/bin/bash
sudo umount $HOME/encrypted
sudo cryptsetup luksClose encrypted
df -h
echo "ENCRYPTED PARTITON DEACTIVATED."

That’s it. Now here’s how to create a file container instead of a partition, if you don’t have any easy way to create or use one. To store our encrypted data this way, we’ll need to create a file which will act as our storage device in place of the partition.

We want to create an empty file, but we can’t have that be a sparse “empty” file, because it doesn’t actually allocate the full file size when it is created which is no good for us. There are two ways to do this.

The easiest and the quickest way of going about this operation is with the “fallocate” command. This instantly allocates the amount of disk you would like for a file and assigns it the filename you give it. For this example we’ll create a 2GB file for our encrypted storage container.

fallocate -l 2G $HOME/encrypted-container

So this will create a 2GB file under your $HOME directory called “encrypted-container”. The disadvantage of this method is that it does not overwrite whatever old, deleted data used to occupy those blocks with zeros or random data. That’s probably not desirable for our purposes, because we don’t want people to be able to tell which portion of the file has encrypted data written to it.
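As an aside, the sparse-file pitfall mentioned earlier is easy to see with du, which reports blocks actually allocated rather than apparent size; a throwaway demo contrasting a sparse file (truncate) with a fallocate’d one (file names are mine):

```shell
# A sparse file has a 2MB apparent size but almost no allocated blocks;
# fallocate reserves the blocks up front.
truncate -s 2M sparse-demo.img
fallocate -l 2M alloc-demo.img
du -k sparse-demo.img alloc-demo.img   # allocated KiB for each file
rm -f sparse-demo.img alloc-demo.img
```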

A better but much slower way is to use the “dd” command in a similar manner as we did on the partition above. It will overwrite the blocks at the file container’s location with zeroes, giving us a zeroed file of the size we want – in this case 2GB (count=2000 blocks of 1MB).

sudo dd if=/dev/zero of=$HOME/encrypted-container bs=1M count=2000

Now we can use this file container and set up LUKS for it.

sudo cryptsetup -y luksFormat $HOME/encrypted-container

As with setting up a partition in the same way, you’ll be warned that it will wipe the data in the file and ask you to set the password/passphrase for the encrypted container.

Once this is done, you can check out the file and see that it’s now a LUKS encrypted file.

file $HOME/encrypted-container
encrypted-container: LUKS encrypted file, ver 1 [aes, cbc-essiv:sha256, sha1] UUID: 0451db36-3423-4ee1-8e2e-cc34c49e05f3

Now you can open the encrypted container using your password.

sudo cryptsetup luksOpen $HOME/encrypted-container encrypted

This is much the same as the process for opening a partition above. We’re using the same mapping name of “encrypted”.

As before the container still needs to be formatted as a regular EXT4 “partition” (even though it’s a file). Do so with: –

sudo mkfs.ext4 -j /dev/mapper/encrypted

Now you can use the same shell scripts as listed above, as long as you change the parameter passed to the luksOpen command from the partition to the path of the file container. The rest is the same, because really the only difference is how the /dev/mapper/encrypted device is created via luksOpen. Once you have that, be it from a file container or a partition, the rest of the process is identical.
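Since the only moving part is the luksOpen target, the open script could even work out which flavour it’s been given; a sketch, with a helper name I made up (it is not a cryptsetup command):

```shell
# Classify a luksOpen target so one open script can serve both setups.
# luks_target_kind is a hypothetical helper, not part of cryptsetup.
luks_target_kind() {
    if [ -b "$1" ]; then echo "partition"
    elif [ -f "$1" ]; then echo "file-container"
    else echo "missing"; fi
}

# In encrypted-open.sh you might then guard the luksOpen call:
#   [ "$(luks_target_kind "$TARGET")" != "missing" ] && \
#       sudo cryptsetup luksOpen "$TARGET" encrypted
luks_target_kind "$HOME/encrypted-container"
```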


Repair a broken Ext4 Partition Superblock

In Linux, the disk space of a partition is subdivided into multiple file-system blocks, which are used for two different purposes. Most blocks store user data, i.e. normal files. Some blocks in every file-system store the file-system’s metadata, which describes the structure of the file-system; the most common metadata structures are superblocks, inodes and directories. Each file-system has a superblock, which contains information such as the file-system type (ext2, ext4, etc.), the size of the partition and its mount status, amongst other things. If this information is lost you are in trouble (data loss!), so Linux maintains multiple redundant copies of the superblock in every file-system. This is very important in emergencies: you can use the backup copies to restore a damaged primary superblock.

For this example, let’s assume your secondary drive’s first partition is corrupt (/dev/sdb1). If your primary root file-system is corrupt, you’ll need to boot your system from a live DVD/CD and repair it from the live OS using the root user account or “sudo [command]” on Ubuntu.

So if you see an error like the below when attempting to mount a file-system: –

/dev/sdb1: Input/output error
mount: /dev/sdb1: can’t read superblock

…your superblock is corrupt and the partition’s file-system is not accessible. You can restore the superblock from a backup, but unless it turns out to be something obvious like a loose SATA cable, your hard disk is probably on the way out and should be replaced as soon as possible, even if the restore succeeds.

Anyway, first make sure your partition is UNMOUNTED (umount /mountpoint). I cannot stress this enough. If you attempt to fix the partition whilst it is mounted, you will corrupt the drive even further.

You can try to run an initial file-system check using the “fsck” command.

fsck.ext4 -v /dev/sdb1

This will probably return something like: –

fsck /dev/sdb1
fsck 1.41.4 (27-Jan-2009)
e2fsck 1.41.4 (27-Jan-2009)
fsck.ext4: Group descriptors look bad... trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/sdb1
 
The superblock could not be read or does not describe a correct ext4
filesystem.  If the device is valid and it really contains an ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>

Next, recover the list of backup superblocks from the partition like so: –

dumpe2fs /dev/sdb1 | grep superblock

This will produce a list of alternate superblocks you can use.

Primary superblock at 0, Group descriptors at 1-6
Backup superblock at 32768, Group descriptors at 32769-32774
Backup superblock at 98304, Group descriptors at 98305-98310
Backup superblock at 163840, Group descriptors at 163841-163846
Backup superblock at 229376, Group descriptors at 229377-229382
Backup superblock at 294912, Group descriptors at 294913-294918
Backup superblock at 819200, Group descriptors at 819201-819206
Backup superblock at 884736, Group descriptors at 884737-884742
Backup superblock at 1605632, Group descriptors at 1605633-1605638
Backup superblock at 2654208, Group descriptors at 2654209-2654214
Backup superblock at 4096000, Group descriptors at 4096001-4096006
Backup superblock at 7962624, Group descriptors at 7962625-7962630
Backup superblock at 11239424, Group descriptors at 11239425-11239430
Backup superblock at 20480000, Group descriptors at 20480001-20480006
Backup superblock at 23887872, Group descriptors at 23887873-23887878
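Rather than retyping each backup number, the block list can be scraped straight out of the dumpe2fs output; a sketch, with a couple of sample lines hard-coded in place of a real `dumpe2fs /dev/sdb1` run:

```shell
# Sample dumpe2fs lines; on a real disk, pipe `dumpe2fs /dev/sdb1` instead.
sample='Primary superblock at 0, Group descriptors at 1-6
Backup superblock at 32768, Group descriptors at 32769-32774
Backup superblock at 98304, Group descriptors at 98305-98310'

# Extract just the backup block numbers (field 4, trailing comma stripped).
printf '%s\n' "$sample" | awk '/Backup superblock/ {gsub(",", "", $4); print $4}'
```

You could then wrap the fsck command in a for loop over those numbers, trying each backup block in turn until one works.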

Now you can use an alternate superblock and attempt to repair the file-system.

fsck -y -b 32768 /dev/sdb1

This will produce output similar to the below: –

fsck 1.40.2 (12-Jul-2007)
e2fsck 1.40.2 (12-Jul-2007)
/dev/sdb1 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong for group #241 (32254, counted=32253).
Fix? yes
Free blocks count wrong for group #362 (32254, counted=32248).
Fix? yes
Free blocks count wrong for group #368 (32254, counted=27774).
Fix? yes
..........
/dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sdb1: 59586/30539776 files (0.6% non-contiguous), 3604682/61059048 blocks

You should now be able to mount the file-system as normal (or reboot if it’s the primary root file-system): –

mount /dev/sdb1 $HOME/mount

Here, I’m mounting the file-system on the mount subdirectory of my user’s (/root in this case) home directory. If this doesn’t work, run through the fsck command above trying each backup superblock number in turn until you find one that works. Once you can successfully mount the file-system at a directory mount point, you can access your files.

Now would be the time to backup those files before the disk fails completely. Sometimes superblocks get corrupted and the disk will be fine for a while longer, but I take no chances :-)


How To Fix Ubuntu Ethernet Connectivity After Update

Sometime last weekend, after a regular Ubuntu update, I lost my network connectivity. Running “ifconfig” showed I had no IP address assigned from my router. Wonderful. Here’s how I fixed it – it’s actually very simple.
I’m on Ubuntu 14.04 LTS at present (at least until the 16.04 LTS point release becomes available).

First, make sure that your ethernet card is actually still recognised by the system – that is, still “seen” by the operating system. Here I list the device settings for eth0 (my NIC).

ethtool eth0

This will give you back something like the following. Basically, if you get output back, your card is recognised by the system; if you get nothing, your card is probably broken. I’ve never seen a working ethernet device return no data from this command, regardless of whether it’s been configured or not.

Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: d
        Wake-on: d
        Link detected: yes

So, your card is still recognised by Ubuntu, just not configured. In my case, an update had somehow deleted the network settings for this device. This is no problem – you can recreate them. Edit the following file: –

sudo vi /etc/network/interfaces

You’ll probably see something like the following: –

auto lo
iface lo inet loopback

This means that only the loopback (127.0.0.1) network interface is configured. This was the problem in my case too. While I assign a static IP address to my Ubuntu machine, it’s static on the router, not the device itself. Meaning that Ubuntu is using DHCP on the network but my router always hands it the same IP address. If you’re using DHCP to get your router to assign the IP address, the settings you need to add are: –

auto eth0 
iface eth0 inet dhcp 

…for device eth0. If you’re using a “proper” static IP address, then the settings will be something like: –

auto eth0
iface eth0 inet static
address 192.168.1.5
netmask 255.255.255.0
gateway 192.168.1.254

Change your “address”, netmask and gateway settings as appropriate. Save this file and restart networking with: –

sudo /etc/init.d/networking restart

If you now type the “ifconfig” command, you should have networking again and be assigned an IP address.
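If you’d rather script the check than eyeball ifconfig, grepping for an “inet” line is enough; a sketch with sample output baked in (on the real machine, swap the sample for `ip addr show eth0`):

```shell
# Sample `ip addr show eth0` output; an "inet " line means an IPv4
# address was assigned to the interface.
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.1.5/24 brd 192.168.1.255 scope global eth0'

if printf '%s\n' "$sample" | grep -q 'inet [0-9]'; then
    echo "eth0 has an IPv4 address"
else
    echo "no IPv4 address assigned"
fi
```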
