Fail of the day #2: SSD drives out of spec

A while back I got an SSD drive with the question whether it could be repaired. At first glance it looked fine. But wait… the PCB is not level. In fact, it is seriously wobbly. What on earth happened to this SSD??

That PCB should really not have that shape.

Looking closer at the flash ICs, it turns out several of them have pins that have disconnected from the PCB. Ok – that’s tough but nothing some careful soldering cannot fix.

But after inspecting the rest of the PCB it is clear that some inductors and capacitors have been torn off too. Wow – this drive took a real beating – nothing here to be salvaged except maybe a crystal or inductor. Could possibly be useful in other projects. Well – it goes into the to-be-used-in-future-projects box.

 

That’s only half the story… Quite a while back I came across another pair of PCBs. Another SSD, in fact – broken in two. Probably on purpose to prevent data extraction from the drive, but it gives a good opportunity to have a closer look at what is inside an Intel SSD drive.


No rescue possible here, no matter how good your soldering skills are.

All the passive components (capacitors, resistors) are so small they are impossible to hand solder – no point in salvaging them. Most of the other components are held in place with epoxy, making removal impossible (but I will for sure buy Intel SSDs from now on – these things are built to last!).

The PCB seems to have multiple layers. There is for sure at least a ground plane in there, probably 1-2 signal layers too (hard to count them without a microscope).

The one component that might be of interest in other projects (adding memory to OpenWRT based routers comes to mind) is the SDRAM, a Samsung K4S281632I-UC60 8M x 16 IC. On the other hand, hand soldering that one will be… difficult (understatement) and require rebuilding the OpenWRT kernel. Hmm.. Will probably just recycle it.


FAIL of the day #1: Repair of NiMH charger


When I bought some GP branded NiMH rechargeable batteries about a year ago, a “GP PowerBank Travel” charger (model GPPB03GS) was included as a promotion. It’s a nice little charger that runs off both 220 V and 12 V (for use in a car, I presume). It can charge 1-4 AA batteries, or 1-2 AAA batteries, with additional trickle charging after full capacity has been reached.

The charger worked well for some months, but one day after charging some batteries overnight the LED blinked red, and when removing the batteries it was clear something had gone wrong. See the melted plastic? Not good.

As this was very much an el-cheapo charger, one should probably not expect much from it. But I was still curious about how the 220V was brought down to more useful voltage levels, and whether the internals would live up to safety standards. I was actually quite surprised at how complex and well designed the internals were:

Looking closer, there are three distinct parts of the PCB:

– 220V section, which is shielded with plastic blast shields towards the rest of the electronics – nice! This is a classic Switched Mode Power Supply (=SMPS), with an optocoupler feedback loop. Looks like an SMD-type TL431 voltage reference – very common in SMPS designs.

– 12V section, with some protection diodes, filter caps etc – but no other major components.

– Charger circuit, using an unknown controller IC. The markings have been shaved off. I just don’t understand why they go to the trouble of doing that… It’s not like this is some super classified product where the design should be kept secret at any cost.

A closer look at the components and PCB around where the plastic had melted does not give any clues of what has gone wrong – in fact nothing visible anywhere on the PCB indicates a catastrophic failure of the charger.

Bringing out the multimeter and measuring the output from both the SMPS and the 12V section shows that those voltages are all good – most likely the problem is instead in the unknown charging controller IC, or possibly some of the tiny SMD FETs, diodes etc that complement it.

So… given that this kind of charger costs next to nothing these days, I’ll leave it for dead for now. The SMPS works as it should, so maybe that part can be reused in some other project – I’ll stash it in the “possible-future-use” parts bin.

Using CoffeeScript to create QlikView extensions

I am not a Javascript fan. To be honest, I find it rather abusive. There are probably lots of theoretically sound reasons for designing the Javascript (=JS) syntax the way it is – but that does not make it more readable or easy to learn.

I am however a QlikView (=QV) fan. Or rather: QV is a very powerful data visualisation, exploration and discovery tool, but its main drawback is its somewhat dated visualisation options. In a world used to fancy angular.js based dynamic web sites and great looking HighCharts graphs, QV’s visualisation options aren’t quite up there.

QV however has a rather interesting “extension” mechanism. You can create object extensions that add new visualisations to QV, using JS to develop the extensions. Document extensions are used to modify the deeper levels of QV applications – very useful and cool (someone created a document extension using the accelerometers in iPhones to enable new ways to interact with QV applications – pretty cool!) but not the focus of this post.

So, we want to create QlikView object extensions. JS is the mandated language. Ouch. We can however use CoffeeScript to remove the bad parts of JS, making the code base smaller and easier to read and maintain. CoffeeScript is very cool, with lots of testimonials to its greatness out there (DropBox is using CoffeeScript these days, and has shared some experiences).

Note: You need to install node.js before CoffeeScript. I’ve tried this on both Windows 8.1 and OS X – compiling CoffeeScript works without problems on both platforms.
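
For reference, a minimal sketch of the install itself (assuming node.js and npm are already in place; the npm package name for CoffeeScript 1.x is assumed to be coffee-script):

# install the CoffeeScript compiler globally (package name assumed; newer releases use "coffeescript")
npm install -g coffee-script
# verify that the coffee command is on the path
coffee --version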

Turns out this works quite well. Brian Munz of QlikTech has created a couple of very nice templates to make it easier to create extensions; I’ve taken the liberty of converting one of them to CoffeeScript, to show how easy it is to convert JS to CoffeeScript (and make my own future extension development easier).

The CoffeeScript code can also be found in my repo at GitHub.

The Javascript version first:


var template_path = Qva.Remote + "?public=only&name=Extensions/template_simple/";
function extension_Init()
{
  // Use QlikView's method of loading other files needed by an extension. These files should be added to your extension .zip file (.qar)
  if (typeof jQuery == 'undefined') {
    Qva.LoadScript(template_path + 'jquery.js', extension_Done);
  }
  else {
    extension_Done();
  }
}

function extension_Done(){
  //Add extension
  Qva.AddExtension('template_simple', function(){
    //Load a CSS style sheet
    Qva.LoadCSS(template_path + "style.css");
    var _this = this;
    //add a unique name to the extension in order to prevent conflicts with other extensions.
    //basically, take the object ID and add it to a DIV
    var divName = _this.Layout.ObjectId.replace("\\", "_");
    if(_this.Element.children.length == 0) {//if this div doesn't already exist, create a unique div with the divName
      var ui = document.createElement("div");
      ui.setAttribute("id", divName);
      _this.Element.appendChild(ui);
    } else {
      //if it does exist, empty the div so we can fill it again
      $("#" + divName).empty();
    }

    //create a variable to put the html into
    var html = "";
    //set a variable to the dataset to make things easier
    var td = _this.Data;
    //loop through the data set and add the values to the html variable
    for(var rowIx = 0; rowIx < td.Rows.length; rowIx++) {
      //set the current row to a variable
      var row = td.Rows[rowIx];
      //get the value of the first item in the dataset row
      var val1 = row[0].text;
      //get the value of the second item in the dataset row
      var m = row[1].text;
      //add those values to the html variable
      html += "value 1: " + val1 + " expression value: " + m + "<br />";
    }
    //insert the html from the html variable into the extension.
    $("#" + divName).html(html);
  });
}

//Initiate extension
extension_Init();

Now the CoffeeScript version. A lot more readable, at least to me:

template_path = Qva.Remote + "?public=only&name=Extensions/template_simple_coffeescript/"

extension_Init = ->
  # Use QlikView's method of loading other files needed by an extension. These files should be added to your extension .zip file (.qar)
  if typeof jQuery == 'undefined'
    Qva.LoadScript(template_path + 'jquery.js', extension_Done)
  else
    extension_Done()

extension_Done = ->
  # Add extension
  Qva.AddExtension('template_simple_coffeescript', ->
    _this = this

    # add a unique name to the extension in order to prevent conflicts with other extensions.
    # basically, take the object ID and add it to a DIV
    divName = _this.Layout.ObjectId.replace("\\", "_")
    if _this.Element.children.length == 0
      # if this div doesn't already exist, create a unique div with the divName
      ui = document.createElement("div")
      ui.setAttribute("id", divName)
      _this.Element.appendChild(ui)
    else
      # if it does exist, empty the div so we can fill it again
      $("#" + divName).empty()

    # create a variable to put the html into
    html = ""

    # set a variable to the dataset to make things easier
    td = _this.Data

    # loop through the data set and add the values to the html variable
    for rowIx in [0..(td.Rows.length-1)]
      # set the current row to a variable
      row = td.Rows[rowIx]

      # get the value of the first item in the dataset row
      val1 = row[0].text

      # get the value of the second item in the dataset row
      m = row[1].text

      # add those values to the html variable
      html += "value 1: " + val1 + " expression value: " + m + "<br />"

    # insert the html from the html variable into the extension.
    $("#" + divName).html(html)
  )

# Initiate extension
@extension_Init()

Note that you need to include the --bare option when compiling the CoffeeScript code:


coffee --bare --compile Script.coffee

This will give us the following Javascript file, which is functionally equivalent to the first JS file above:

// Generated by CoffeeScript 1.6.3
var extension_Done, extension_Init, template_path;

template_path = Qva.Remote + "?public=only&name=Extensions/template_simple_coffeescript/";

extension_Init = function() {
  if (typeof jQuery === 'undefined') {
    return Qva.LoadScript(template_path + 'jquery.js', extension_Done);
  } else {
    return extension_Done();
  }
};

extension_Done = function() {
  return Qva.AddExtension('template_simple_coffeescript', function() {
    var divName, html, m, row, rowIx, td, ui, val1, _i, _ref, _this;
    _this = this;
    divName = _this.Layout.ObjectId.replace("\\", "_");
    if (_this.Element.children.length === 0) {
      ui = document.createElement("div");
      ui.setAttribute("id", divName);
      _this.Element.appendChild(ui);
    } else {
      $("#" + divName).empty();
    }

    html = "";
    td = _this.Data;
    for (rowIx = _i = 0, _ref = td.Rows.length - 1; 0 <= _ref ? _i <= _ref : _i >= _ref; rowIx = 0 <= _ref ? ++_i : --_i) {
      row = td.Rows[rowIx];
      val1 = row[0].text;
      m = row[1].text;
      html += "value 1: " + val1 + " expression value: " + m + "<br />";
    }
    return $("#" + divName).html(html);
  });
};

this.extension_Init();
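
Side note: during development it is convenient to let the compiler watch the file and recompile it on every save. A hedged example (the --watch flag should be available in CoffeeScript 1.x):

coffee --bare --watch --compile Script.coffee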

Burning ISOs to USB sticks on Mac / OS X

For some reason I cannot get the easy-to-use tools out there for burning ISOs to work… Command line to the rescue:

First, make sure Homebrew is installed. It is not strictly needed for the burning-to-thumb-drive process, but it enables the progress indicator, which is quite nice to have for long-running tasks. Now install Pipe Viewer from Homebrew:


$ brew install pv

Now we need to figure out the device name of our USB drive. In a terminal window (you are using iTerm2 – right? Infinitely better than OS X’s built-in Terminal app):


$ diskutil list

/dev/disk0
 #: TYPE NAME SIZE IDENTIFIER
 0: GUID_partition_scheme *251.0 GB disk0
 1: EFI EFI 209.7 MB disk0s1
 2: Apple_HFS Macintosh HD 250.1 GB disk0s2
 3: Apple_Boot Recovery HD 650.0 MB disk0s3
/dev/disk1
 #: TYPE NAME SIZE IDENTIFIER
 0: GUID_partition_scheme *320.1 GB disk1
 1: EFI EFI 209.7 MB disk1s1
 2: Apple_HFS SSD backup 180.0 GB disk1s2
 3: Apple_HFS Temp 139.6 GB disk1s3
/dev/disk2
 #: TYPE NAME SIZE IDENTIFIER
 0: GUID_partition_scheme *1.0 TB disk2
 1: EFI EFI 209.7 MB disk2s1
 2: Apple_HFS Macken_Ext Backup 999.9 GB disk2s2
/dev/disk3
 #: TYPE NAME SIZE IDENTIFIER
 0: FDisk_partition_scheme *8.0 GB disk3
 1: DOS_FAT_32 WHEEZY 8.0 GB disk3s1
$

/dev/disk3 is the USB thumb drive. I previously had another Wheezy image on it, thus its name.
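
Before writing anything, it is worth double checking that disk3 really is the thumb drive (a quick sanity check; the exact output fields vary a bit between OS X versions):

$ diskutil info /dev/disk3

The reported size and media name should match the USB stick, not one of the internal or backup drives.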

Now unmount it:


$ diskutil unmountDisk /dev/disk3
Unmount of all volumes on disk3 was successful
$

Nice. Now let’s write the ISO to the drive:


$ pv -petr ~/Desktop/debian-7.2.0-amd64-DVD-1.iso | sudo dd of=/dev/disk3 bs=128k
Password:
0:00:38 [4.94MiB/s] [====>                  ] 3% ETA 0:16:55

Now let’s wait. Looks like it will take approximately another 17 minutes..

When done, just eject the thumb drive as usual, remove it and you have a bootable Debian install drive. Mission accomplished.
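
For completeness, the terminal equivalent of that final eject (same disk number as above):

$ diskutil eject /dev/disk3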

Netgear RN312 firmware upgrade 6.0.8 to 6.1.2


Seems Netgear just released firmware version 6.1.2 for those products that support the new UI (I believe UltraNAS and other non-Intel based devices do not get the benefits of the new version 6 and above firmware – or maybe they have reconsidered – not sure).

Updating is always a bit scary when you have a smoothly running system, but after reading the release notes it seems the changes mainly cover more high-end devices (compared to the RN312 that I have), so why not.

I believe the upgrade went well; all the applications I have installed myself (CrashPlan, Monitorix etc) seem to be working OK.


Moving CrashPlan cache and log directories to new locations

As discussed in a previous post, the ReadyNAS might run out of disk space on the 4 GB root partition if you install software other than that provided by NetGear.
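
Before moving anything, it can be useful to check what is actually eating the root partition. A minimal sketch, assuming the GNU versions of du and sort that the Debian based ReadyNAS OS should ship with:

root@RN312:/home/admin# du -xh --max-depth=3 / 2>/dev/null | sort -h | tail -n 20

The -x flag keeps du on the root filesystem, so the large /data, /home and /apps volumes are not counted.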

In my case it was CrashPlan’s cache and log files that were filling up the root partition, with warning emails every 10 minutes that 81% of the root partition was used, 82%… 83%…, so they needed a new home. Turns out it is not too hard:

ssh into the NAS, then su to become root. Stop CrashPlan (if it is running):

root@RN312:/home/admin# service crashplan stop
 Stopping CrashPlan Engine ... OK
root@RN312:/home/admin#

Make a copy of CrashPlan’s configuration file, in case something goes wrong:

root@RN312:/home/admin# cp /usr/local/crashplan/conf/my.service.xml /usr/local/crashplan/conf/my.service.xml.orig
root@RN312:/home/admin#

Take a look at CrashPlan’s cache directory:

root@RN312:/home/admin# ls -lah /usr/local/crashplan/cache/
 total 40M
 drwxr-sr-x 1 root staff  106 Sep 25 03:00 .
 drwxr-sr-x 1 root staff  258 Sep 25 21:31 ..
 drwxr-sr-x 1 root staff  170 Sep 25 21:31 42
 -rw-r--r-- 1 root staff 8.4K Sep 25 21:31 cpft1_42
 -rw-r--r-- 1 root staff 1.9K Sep 25 21:31 cpft1_42i
 -rw-r--r-- 1 root staff 2.1K Sep 25 21:31 cpft1_42x
 -rw-r--r-- 1 root staff  23M Sep 25 21:31 cpgft1
 -rw-r--r-- 1 root staff 8.8M Sep 25 21:31 cpgft1i
 -rw-r--r-- 1 root staff 7.9M Sep 25 21:31 cpgft1x
 -rw-r--r-- 1 root staff  986 Sep 25 03:02 cpss1
root@RN312:/home/admin#

Create cache directory in new location:

root@RN312:/home/admin# mkdir /home/admin/from_root/crashplan/cache

Change the config file to point to the new location (using your favourite editor, vim used here):

root@RN312:/home/admin# vim /usr/local/crashplan/conf/my.service.xml

Change
<cachePath>/usr/local/crashplan/cache</cachePath>
to
<cachePath>/home/admin/from_root/crashplan/cache</cachePath>

(Adjust as needed if you have selected some other place for the CrashPlan files.)

Now move the cache files:

root@RN312:/home/admin# mv /usr/local/crashplan/cache/* /home/admin/from_root/crashplan/cache/
root@RN312:/home/admin#

Time to move CrashPlan’s log files. They are originally stored in /usr/local/crashplan/log/; let’s move them to /home/admin/from_root/crashplan/log.

root@RN312:/home/admin# ls -lah /usr/local/crashplan/log/
 total 111M
 drwxrwxrwx 1 root staff  346 Sep 23 04:41 .
 drwxr-sr-x 1 root staff  258 Sep 25 21:31 ..
 -rw-r--r-- 1 root root   33K Sep 25 21:31 app.log
 -rw-r--r-- 1 root root   23M Sep 25 21:31 backup_files.log.0
 -rw-r--r-- 1 root root   26M Jul 12 19:50 backup_files.log.1
 -rw-rw-rw- 1 root root     0 Aug 15 15:21 engine_error.log
 -rw-r--r-- 1 root root  6.4K Sep 25 21:31 engine_output.log
 -rw-r--r-- 1 root root  180K Sep 25 21:31 history.log.0
 -rw-r--r-- 1 root root  501K Sep 17 13:47 history.log.1
 -rw-r--r-- 1 root root  501K Aug 25 08:10 history.log.2
 -rw-rw-rw- 1 root root     0 Aug 15 15:24 restore_files.log.0
 -rw-r--r-- 1 root root   13M Sep 25 21:31 service.log.0
 -rw-r--r-- 1 root root   26M Sep 23 04:41 service.log.1
 -rw-r--r-- 1 root root   26M Sep 17 14:35 service.log.2
root@RN312:/home/admin#
root@RN312:/home/admin# mkdir /home/admin/from_root/crashplan/log
root@RN312:/home/admin#
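
Then move the existing log files over to the new directory, analogous to the cache move above (adjust the paths if you picked a different destination):

root@RN312:/home/admin# mv /usr/local/crashplan/log/* /home/admin/from_root/crashplan/log/
root@RN312:/home/admin#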

Find the fileHandler tags (there are 4 of them dealing with log files) and modify them so they point to the new log directory. So, once again edit /usr/local/crashplan/conf/my.service.xml; part of mine looks like this after moving the log files. Change the paths as needed for your choice of new directories:

<serviceLog>
     <fileHandler append="true" count="2" level="ALL" limit="26214400" pattern="/home/admin/from_root/crashplan/log/service.log"/>
   </serviceLog>
   <serviceErrorInterval>3600000</serviceErrorInterval>
   <historyLog>
     <fileHandler append="true" count="10" level="ALL" limit="512000" pattern="/home/admin/from_root/crashplan/log/history.log"/>
   </historyLog>
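
A quick sanity check that all paths in the config now point to the new location (hedged sketch; you should see the cachePath plus the four fileHandler patterns):

root@RN312:/home/admin# grep -n "from_root" /usr/local/crashplan/conf/my.service.xml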

Start CrashPlan again:

root@RN312:/home/admin# service crashplan start
 Starting CrashPlan Engine ... OK
root@RN312:/home/admin#
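
To confirm that new log entries actually end up in the new location, tail the service log for a moment (Ctrl-C to stop; path as chosen above):

root@RN312:/home/admin# tail -f /home/admin/from_root/crashplan/log/service.log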

And finally check free disk space on /:

root@RN312:/usr/local/crashplan/log# df -h
 Filesystem      Size  Used Avail Use% Mounted on
 rootfs          4.0G  1.7G  1.8G  49% /
 tmpfs            10M  4.0K   10M   1% /dev
 /dev/md0        4.0G  1.7G  1.8G  49% /
 tmpfs           2.0G     0  2.0G   0% /dev/shm
 tmpfs           2.0G  5.8M  2.0G   1% /run
 tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
 tmpfs           2.0G     0  2.0G   0% /media
 /dev/md127      2.8T  1.1T  1.7T  39% /data
 /dev/md127      2.8T  1.1T  1.7T  39% /home
 /dev/md127      2.8T  1.1T  1.7T  39% /apps
root@RN312:/usr/local/crashplan/log#

49% – nice!

Installing Debian on old ASUS motherboards

I have a couple of decommissioned ASUS motherboards (M2NPV-VM and A8N-VM CSM), as well as a 19″ cabinet with ATX cases in it; together they could make a nice setup for lab work, trying out Linux server stuff, serving as a test bed for network gear etc.

Installing Linux (Debian) is usually pretty easy, but there were a couple of snags along the way.
So, note to self: read this if these motherboards need to be reinstalled sometime. It will save some time.

Booting from USB flash disk

  1. The BIOS of both boards needs to be changed so that the flash disk is the 1st disk (before the SSD that is also installed), and also 1st in boot order. Otherwise the board will not boot from the thumb drive.
  2. Install Debian as usual.
  3. Once you get to the GRUB installation part of the Debian install, follow the default setting and install to the first disk. Which is the flash thumb drive, I know. But trying to get the Debian installer to install GRUB anywhere else just failed consistently – I have no idea why. It should have worked to install it to /dev/sdb (which is the SSD).
  4. Reboot into recovery mode with the thumb drive still inserted (as GRUB was installed to it, remember?). You should now end up in a command line shell.
  5. Do a “grub-install /dev/sdb” to install GRUB to the SSD (see the sketch after this list). The device names might differ depending on the installed hardware; check with “ls /dev”, “lsblk” and related commands to get the device name of the SSD.
  6. Reboot, quickly remove the thumb drive during the reboot, and GRUB should now appear, served from the SSD.
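
For reference, a minimal sketch of the recovery-shell commands behind step 5 (device names are assumptions and depend on your hardware; verify them before running grub-install):

# list block devices to confirm which one is the SSD
lsblk
# install GRUB to the SSD and regenerate the GRUB configuration
grub-install /dev/sdb
update-grub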