Project: Linux on solid state flash laptop


I want to create a new Linux project. The laptop has a solid state flash drive and only 128 MB of RAM.

The following must run:
– Debian/Ubuntu as the main OS
– An X window manager (Fluxbox, Enlightenment, IceWM or Xfce)
– PPTP VPN connections
– A remote desktop client for Windows RDP

The main problem with solid state flash is that the disk survives only a limited number of write cycles, so the logs and temporary files must live in RAM. I found the following solution (entries for /etc/fstab):


tmp     /tmp            tmpfs   noexec,nosuid,rw,size=1024K     0       0
vartmp  /var/tmp        tmpfs   noexec,nosuid,rw,size=1024K     0       0
varlog  /var/log        tmpfs   noexec,nosuid,rw,size=2048K     0       0
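Once these entries are in /etc/fstab and mounted (e.g. with mount -a), you can verify that a path really ended up on tmpfs by checking /proc/mounts. A minimal sketch; the is_tmpfs helper is my own, not a standard tool:

```shell
# is_tmpfs PATH [MOUNTS_FILE]: succeed if PATH is listed as a tmpfs mount.
# (helper name is my own; it just scans the mount table)
is_tmpfs() {
  awk -v m="$1" '$2 == m && $3 == "tmpfs" { found = 1 } END { exit !found }' "${2:-/proc/mounts}"
}

# Example: check whether /tmp is already on tmpfs on this machine.
if is_tmpfs /tmp; then
  echo "/tmp is on tmpfs"
else
  echo "/tmp is on disk"
fi
```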


My distro choice is DSL (Damn Small Linux), because it is a very small distro that includes a window manager.

The first big problem is that I have only a 64 MB SSD. I must reserve 57 MB for my “frugal” installation, which leaves 7 MB for home & swap. I tried running it without swap, but DSL was very unstable on my laptop.


I found a guide (DSL Create own CD (PDF)) for creating my own installation CD. While re-mastering the distro I can add items, but soon I’ll also try this method to strip the distribution down to a smaller one 🙂


When I remastered the DSL image, I saw one big KNOPPIX file. Now I must rebuild the KNOPPIX environment instead of the DSL one. The KNOPPIX file is like an ISO: I must mount the file before I can edit it.

I found the following steps on the internet:

Uncompress the DSL-N *.iso and unpack the KNOPPIX image

# mkdir /ramdisk/image
# mount /path-to-file/dsl-n-01RC4.iso /ramdisk/image -t iso9660 -o loop,ro
# mkdir /ramdisk/unpack
# mount /ramdisk/image/KNOPPIX/KNOPPIX /ramdisk/unpack -t iso9660 -o ro,loop=/dev/cloop50

Prepare a place to put the files for the re-mastered knoppix image

# mkdir /ramdisk/source
# mkdir /ramdisk/newcd
# mkdir /ramdisk/newcd/KNOPPIX
# cp -Rp /ramdisk/unpack/* /ramdisk/source
# cp -Rp /ramdisk/unpack/.bash_profile /ramdisk/source

Copy additional files to be added to the new knoppix image

# cp /path-to-file/file /ramdisk/source/path-to-file/file
# etc etc

“Pack” the new knoppix image

# mkisofs -R /ramdisk/source | create_compressed_fs - 65536 > /ramdisk/newcd/KNOPPIX/KNOPPIX

I tried the mount and it works 🙂 So now I must find the big files and strip them out. My next step is to find a tool like KDirStat to locate these files.
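Until I find a KDirStat-like tool small enough for the disk, plain du can already do the job. A quick sketch; the biggest function is just my own wrapper name:

```shell
# List the ten largest files and directories under a tree, sizes in KB,
# biggest first -- a poor man's KDirStat.
biggest() {
  du -ak "$1" | sort -rn | head -n 10
}

# Example (path from the remaster steps above):
#   biggest /ramdisk/source
```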

Ubuntu 30-Mount Check Annoyance

If you’ve used Ubuntu Linux for longer than a month, you’ve no doubt noticed that every 30th boot you are forced to sit through a filesystem check. This check is necessary to keep your filesystem healthy. Some people advise turning the check off completely, but that is generally not recommended. Another option is to increase the maximum mount count from 30 to some larger number like 100; that way it’s about three times less annoying, but this is also not recommended. Enter AutoFsck.
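For the record, raising the maximum mount count is done with tune2fs. A small demo on a throwaway image file, so no root access or real disk is touched; it assumes e2fsprogs (mke2fs, tune2fs) is installed:

```shell
# Create a 1 MB scratch image and put an ext2 filesystem on it.
dd if=/dev/zero of=/tmp/demo.img bs=1024 count=1024 2>/dev/null
mke2fs -q -F /tmp/demo.img

# Raise the maximum mount count to 100.
tune2fs -c 100 /tmp/demo.img >/dev/null

# Show the new value.
tune2fs -l /tmp/demo.img | grep -i "maximum mount count"

# On a real root filesystem this would be e.g.: tune2fs -c 100 /dev/sda1
```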

AutoFsck is a set of scripts that replaces the filesystem check script shipped with Ubuntu. The difference is that AutoFsck doesn’t ruin your day if you are unfortunate enough to hit the 30th mount. The most important difference is that AutoFsck does its dirty work when you shut your computer down, not during boot, when you need your computer the most!

The 30th time you mount your filesystem, AutoFsck waits until you shut down your computer. It then asks whether this is a convenient moment to check your filesystem. If it is, AutoFsck restarts your computer, automatically runs the filesystem check, and then immediately powers the system down. If it is not convenient at that moment, AutoFsck waits until the next time you shut down and asks again. Being prompted for a filesystem check during shutdown is infinitely more convenient than being forced to sit through a 15 minute check during boot.


How to shrink a SQL Transaction Log

For 2005:
-- You can get the logical log file name using the following command in Query Analyzer:

exec databaseName.dbo.sp_helpfile

Now execute the following commands to shrink the database log to 200 MB:

BACKUP LOG databaseName WITH TRUNCATE_ONLY
DBCC SHRINKFILE (logicalLogFileName, 200)

-- If it doesn’t work, run the two commands again.

-- When done with that, do a full backup of your DB, as you will have broken your transaction log backup chain.

For 2008+

USE databasename;
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE databasename
SET RECOVERY SIMPLE;
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE (2, 1);  -- here 2 is the file ID of the transaction log file; you can also use the logical log file name (dbname_log)
-- Reset the database recovery model.
ALTER DATABASE databasename
SET RECOVERY FULL;

MCSA 2003 70-292 Tips & Tricks

MCSA 2003 tips & facts:

1. The PDC emulator (FSMO role) is responsible for time sync. A handy fact: you can sync the time with w32tm /config /syncfromflags:manual /manualpeerlist:time_source.
2. For optimal performance a Windows machine needs at least 5% free RAM. For 512 MB that comes to at least 25.6 MB.
3. Perfmon: Memory – Pages/sec should not be higher than 5; otherwise there is a memory problem
4. Perfmon: Logical Disk – Avg. Disk Queue Length should not be higher than 2 plus the number of spindles in the system. For 1 disk that is 1 (disk) + 2 (baseline) = 3 (max)
5. Private implies a secured connection
6. WPAD = Web Proxy Auto Detect
7. The 80/20 rule for a segment means:
DHCP-A: 80% of the range, with the remaining addresses excluded

DHCP-B: 20% of the range, with the remaining addresses excluded
8. Reserved DHCP network addresses are always entered in lowercase and without separators
9. System monitor:

Counter                          Explanation                                  Acceptable values
Memory: Pages/sec                pages moved between disk and RAM             30 or lower
Memory: Page Faults/sec          the OS did not find the page in RAM          40 (old PCs), 150 (new PCs)
Disk: % Disk Time
Disk: Avg. Disk Queue Length     no higher than 2 + the number of spindles in the system
Processor: % Processor Time      greater than 85% is not always a problem
Processor: Queue Length          greater than 2 × the number of CPUs = too much

Exchange 2003 Disaster recovery

With a good backup in hand and the Exchange databases and log files on different hard drives, it is no problem to recover from an Exchange disaster. Just restore the data from backup and initiate a roll forward of the transaction logs. Well done, the Exchange information store goes back online.

But what should you do when your backup isn’t readable, or you don’t have a backup at all? Here’s where these tools come into play.

Before you start:

  • Make sure that the databases really won’t start
  • Check the Application log for Exchange events that can tell you the cause of the failure
  • Make a backup of the database files
  • Restart the server so that a soft recovery can be attempted

ESEUTIL /P parameters

ESEUTIL /P repairs a corrupted or damaged database. Ensure that you have free disk capacity of at least 20% of the Exchange database size.

Figure 9: ESEUTIL repair mode


ESEUTIL /P "c:\program files\exchsrvr\mdbdata\priv1.edb" /Se:\exchsrvr\mdbdata\priv1.stm /Te:\tempdb.edb

This command will repair the database PRIV1.EDB. If you have no .STM file, you can create one with ESEUTIL /CREATESTM.

After running ESEUTIL, you can open a detailed log file called <database>.integ.raw to see the results.

As a last step, run ISINTEG -fix -test alltests. You can read more about ISINTEG later in this article.

Note: sometimes you must run the repair over and over again until it fixes all problems. It’s like defragmenting a hard drive.