Category Archives: Open-Source

An Introduction to Thin Clients on Linux with LTSP

I just realized that I have written quite a few articles about LTSP on this site, but have never explained in any detail what a thin client and LTSP (Linux Terminal Server Project) actually are. I'll try to explain a little in this post.


UPDATE : Installing LTSP is now very easy, especially on Ubuntu Linux.
Just follow this guide : Ubuntu LTSP Quick Install


A thin client is a kind of IT infrastructure in which the client/workstation/desktop only displays the screen/output and does no other computing. All of the work is done on the server. Because of this, the client does not need a computer with a "fancy" specification. A Pentium II with 32 MB of memory is more than enough, and no hard disk is needed.
The thin-client architecture is also sometimes known as centralized or server-based computing.

Examples of thin-client solutions include Windows Terminal Server, Citrix Metaframe, NX, and the one discussed briefly here, LTSP.

A thin-client solution has many advantages over conventional desktops :

  1. Much cheaper hardware investment : where we would normally have to buy every new staff member a Pentium IV with at least 256 MB of memory, with thin clients a used Pentium II with 32 MB of memory is enough, yet its performance can still match the Pentium IV.
  2. Longer hardware lifecycle : on top of the cheaper hardware investment mentioned above, the hardware also lasts longer. Where we might normally need to upgrade desktop computers every 3-4 years, with a thin-client solution the same machines can be used for more than 5 years while still performing very well.
  3. Maintenance : much easier, it doesn't disturb the user, and it doesn't take long. Normally, when a computer breaks, we need at least a day (back up the user's data, reinstall the machine, restore the data). With thin clients we simply swap the user's machine for another Pentium II, and the user is back at work within minutes.
  4. Desktop management : also becomes much easier. For example, with 100 desktops we would have to install all of the software 100 times. With a thin-client solution we install it once, and all 100 desktops automatically get it as well.

    We can also easily "lock down" the client desktops so users cannot install software without our knowledge, which is one of the main entry points for viruses / spyware / trojans, with follow-on effects that can be fatal for a company.
  5. Cheap & easy upgrades : to improve the performance of all desktops, it is often enough to upgrade the memory in the server and/or upgrade the switch. Compare this with regular desktops, where with 100 desktops the total upgrade cost is multiplied by 100 machines, which is very expensive and inefficient.
  6. Data security : because all data is stored on the server, it is easier to protect it from rogue staff (corporate espionage, internal hackers, and so on). Thin-client desktops can also be "locked" so that all local data-access facilities (floppy, USB, etc.) are disabled, so a rogue staff member cannot copy data from their machine and carry it out of the company.

LTSP, as one of these thin-client solutions, has all of the advantages above, plus :

  1. No license fees : it is GPL-licensed (open source). Compare this with, say, Windows Terminal Server or Citrix, which can easily run into thousands or tens of thousands of dollars.
  2. Flexible, easy to upgrade : I have experienced first-hand how easy it is to upgrade to the latest version; just install the new version (which goes into a different directory from the previous one), copy over the old configuration files, and voila, done.
  3. Neutral : whatever Linux distro you use, it is almost certain that LTSP can be installed on it.

So what is LTSP itself? Technically, LTSP is a set of scripts that lets us display the server's screen on the client; that is the essence of it. Of course there is much more inside: remote boot, remote file system, hardware auto-detection, remote multimedia & output, and so on.

Does LTSP have weaknesses? Of course; no technology is without them. So far there are a few, such as bandwidth usage that is somewhat heavier than Citrix (roughly a maximum of 50 clients per 100 Mbps network segment), and the server being a single point of failure.
But all of this can be handled with good planning, a disciplined data-backup routine, and a proper disaster-recovery strategy (in which recovery can be done in a matter of minutes).

That was a quick overview of thin clients & LTSP. I hope it is useful.

Open Source = National Security

Among the arguments I present to clients about why they should choose open solutions, one of them (especially for government clients / departments) is security.
With an open solution, we can, among other things, audit the source code, so we can be confident the software really is safe and carries no "gifts" from a foreign government.
This is hard (if not impossible) to do with closed / proprietary software.

And this is not just my fantasy. Cases like the sabotage of the Russian gas pipeline are among the most spectacular examples.

What we should worry about, though, are the low-profile or hidden cases, such as the quiet theft of classified data. And this, again, is not an imaginary scenario; it already happens routinely on the Internet. There are identity-theft rings that regularly steal your personal data and then sell it on the black market.
What if what gets stolen turns out to be state secrets? They would surely sell for far more than Visa Gold card details, which go for around US$ 100 on the black market.

Hopefully, with this consideration (and others), our government will become even more enthusiastic about going open.

High-load Website (WordPress) Optimization : IlmuKomputer.com

Mr. Romi, founder of IlmuKomputer.com (IKC), yesterday asked me to help optimize the website. A bit about IlmuKomputer.com: the name means "Computer Knowledge", and the site contains a lot (and I mean a lot) of free, high-quality computer tutorials.
As you can easily guess, the website is very popular. During peak hours it usually becomes overloaded and unresponsive.

I'm only too happy if I can be of assistance to IKC's team in their good cause. So I started working on it with help from one of my staff, Yopi.

It turned out that what we'd be doing would be very different from what most others do. Then again, IKC is a very popular website (and gets “slashdotted” daily, by leechers), so what works for most others doesn't work for us.

The Bottlenecks

A bit of background – IKC uses WordPress as its CMS. It's a very nice CMS, and makes your life easier. I've used WP myself since version 1.5.x. However, being database-driven, there are many points within a WP-based infrastructure that can become potential bottlenecks. So if your website starts to become popular with this CMS, you will need to start optimizing it.

After examining the situation for a while, it was clear that MySQL was THE bottleneck. The output of top showed it using at least 8 times more CPU time than any other service. Mr. Romi also told me how it kept falling over at peak times.

Apache (and PHP, since it's compiled as an Apache module) was the next one, with each of its processes using more than 10 MB of RAM. This may seem insignificant at first, but multiply that by (potentially) 150 processes and you've got quite a memory hog.
CPU-usage-wise, I was quite surprised to see each incoming request cause the handling process's CPU usage to spike to more than 50%.

Initial actions

I asked Mr. Romi to increase the size of MySQL's internal caches. He did, but the machine still fell over on a daily basis.
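For readers wondering what that involves: these knobs live in my.cnf, roughly as sketched below. The values shown are examples only, not what was actually used on IKC's server:

[mysqld]
# cache SELECT results so repeated WordPress queries skip the table scan
query_cache_type  = 1
query_cache_size  = 32M
# index cache for MyISAM tables, which WordPress uses by default
key_buffer_size   = 64M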

He had also implemented caching on the app server (PHP) via the wp-cache plugin. Still no joy.

The Edge

I decided we needed to go straight to the “edge”, and stop the load there.

I proposed setting up Squid in HTTP acceleration mode. This way, most requests won't even touch Apache, much less MySQL. Squid will bear most of the load, but since it's very efficient, it should help a lot in making the website perform better.
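As an illustration of what "HTTP acceleration mode" means in practice, here is a minimal Squid 2.6-style reverse-proxy sketch. The hostname, port and cache size are assumptions (it assumes Apache has been moved to port 8080 on the same machine); this is not IKC's actual configuration:

http_port 80 accel defaultsite=ilmukomputer.com vhost
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=origin
acl our_site dstdomain ilmukomputer.com
http_access allow our_site
cache_peer_access origin allow our_site
# cache_mem controls the in-memory object cache; raising it turns more
# TCP_HITs into TCP_MEM_HITs (see below)
cache_mem 256 MB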

Since I had a few things to do myself, I asked Yopi to set up Squid on our test machine.
I just gave him pointers now and then, yet he managed to finish testing the setup and implement it on IKC's server in just about 3.5 hours.

Then I showed him "tail -f /var/log/squid/access.log", and we watched in amazement at how quickly the TCP_MISS lines changed into TCP_HITs.
After about 12 hours I increased the cache_mem size, and the TCP_HITs slowly turned into TCP_MEM_HITs.

The result

Squid is working as we expected.

Average server load dropped from over 30% to about 3%, while Squid's CPU usage increased from 0% to an average of only 2%. A very nice trade-off.

After about a month, I checked the website's log files and saw some very nice numbers — traffic to IlmuKomputer.com has doubled! Needless to say, Mr. Romi is very happy with it.

I also found that every day there are people downloading the contents with crawler software such as Teleport Pro, wget, etc. I asked Mr. Romi if he had a problem with that, and he said no. It is his mission to spread knowledge for free, after all. So I left these leechers alone.

Come to think of it, it's possible that these crawlers were the ones causing the IKC server to fall over at peak hours. For example, Teleport Pro can download 10 links simultaneously, and as soon as any of them finishes, it instantly starts downloading the next one. When all 10 downloads hit the database, and many crawlers do this at the same time, not many servers can stand up to it. It's like being machine-gunned while wearing nothing but simple leather clothing. If you have ever had your website linked from Slashdot or Digg, you'll know what I'm talking about.

In this case, Squid acts as thick titanium armor, taking most of the hits for the server. I suspect the number of crawlers has increased since then, but it shouldn't be a problem.

MySQL is a bit strange though. Sometimes its CPU usage can be as high as 160%. Thankfully this is very rare, so it’s probably just some internal clean-up routine.

One day, after happily watching the low load on the server for a while, suddenly everything froze, even my SSH connection. Attempts to reconnect to the server failed.
After a while, I was finally able to connect again. Looking around, I noticed some sort of bandwidth-limiter daemon running on the server. After consulting with Mr. Romi, I killed it. The problem stopped.

Happy ending?

I'm still monitoring the server for glitches as we speak. For example, Squid seems to hang from time to time. This could be caused by anything from bad memory to a problem with the specific hardware configuration, so for now I've set up a cron job that restarts it at certain intervals.
It seems to help, so I can troubleshoot the problem in peace.
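For the curious, the stopgap is nothing fancier than a root crontab entry along these lines (the interval is an example, not the exact one in use):

# restart Squid every 6 hours while the cause of the hangs is investigated
0 */6 * * * /etc/init.d/squid restart > /dev/null 2>&1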

Anyway, I’m sure that with the increased availability, even more people will visit the website (Ed: confirmed!). Then at some time in the future, we may find the server overloaded again.

In that case, there are still many things we can do to keep IKC up & running on just one server :

  • Coral-ize internal links : Coral is a global cache with servers all over the world. It has proven to help people with overloaded servers lighten their load (when slashdotted, dugg, etc.). With the Coralize plugin, all of your internal links will point to their Coral-cached copies.

    Actually, for most people, this may be the easiest and best step they can take. I can set up Squid because IKC has its own dedicated server. Not everyone does; I personally also own a (shared) webhosting account. Coral CDN (Content Distribution Network) is a very nice & easy solution for us. It's rarely mentioned though, so here you go.

    If you're not using WordPress, you can still use Coral CDN easily! Just append .nyud.net:8080 to your links. For example, if you access http://harry.sufehmi.com.nyud.net:8080, you'll actually reach a Coral server, serving a copy of my website from its cache.
    I did say it's very easy, didn't I? 🙂

  • RAM upgrade : this will let Squid have a bigger memory cache, significantly increasing its effectiveness.
  • Roundrobin edge servers : if the load gets so high that even Squid is overwhelmed, then we can deploy a cluster of edge servers. People can volunteer their servers to act as edge servers for IlmuKomputer.com.

    The incoming requests are spread over the edge servers via round-robin DNS (see the zone-file sketch after this list). It's not the best way to do it, but it's very easy and costs almost nothing.

  • Use lighttpd : Apache is a rather heavy webserver. I personally like its (amazing) flexibility (there's a reason it's called the Swiss Army Knife of webservers), but at times you'll need something else. From my experience, lighttpd + FastCGI is a very nice alternative to Apache + PHP. Its feature set is now quite similar to Apache's, but it's much more lightweight. Its community is also quite helpful and happy to help a newbie, within reason. Recommended. (See the config sketch after this list.)
  • And many other ways
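To illustrate the round-robin DNS idea, here is a hypothetical zone-file fragment; the addresses are documentation examples, not real edge servers. BIND (and most other DNS servers) will rotate the order of the A records between queries, spreading visitors across the edges:

www     300   IN   A   203.0.113.10
www     300   IN   A   203.0.113.20
www     300   IN   A   203.0.113.30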
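And to illustrate the lighttpd + FastCGI idea, a minimal lighttpd.conf sketch for serving PHP; the binary path and socket location are assumptions that depend on your install:

server.modules += ( "mod_fastcgi" )

fastcgi.server = ( ".php" =>
  (( "bin-path"  => "/usr/bin/php-cgi",
     "socket"    => "/tmp/php-fastcgi.socket",
     "max-procs" => 2 ))
)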

Last, we'd like to thank Mr. Romi for giving us the opportunity; it was very interesting! I hope IKC becomes even more successful in the future, benefitting even more people. Well done, Pak.

Solution : VisualBasic on Linux / non-Microsoft platform

One of the questions I get asked most often by customers, in relation to their planned migration to Linux, is "Will Linux run our legacy application?". And 90% of the time, that legacy app is a VisualBasic 6 application. They fear it won't run on Linux and their business will suffer.

I've always told my customers that "technical problems are not a problem".
I can always help them find a solution for a technical issue. It's the political ones that sometimes prove impossible to deal with 🙂
For example, I once met a Canadian consultant who happily informed me that he had succeeded in making legacy apps running on older-than-dinosaur servers talk to web apps on Linux, by writing a wrapper for these oldies. My inner geek bowed and saluted his hacking wizardry, and once again my faith in our ability to overcome technical issues was strengthened.

Back to VisualBasic: what I do first is observe the customer's current situation. Each customer is unique, and one solution won't work for all of them. After the fact-finding session, I'm usually able to prescribe the best solution for them.

Today I found one other possible solution for this.

I found a discussion on Slashdot where it was noted that RealBasic is almost 100% compatible with VisualBasic and runs on non-Microsoft platforms (even on Mac OS X).
Many will find its price (US$ 500) far cheaper than redeveloping their corporate application.

It's not the solution for everyone, but it's always good to have yet another choice, especially in this mid-price range – it's a clean solution and still affordable.

Just another reminder why I still check Slashdot from time to time — it’s not for the news, but the comments.
You guys rock. Thank you.

And to those looking to develop their corporate applications – go web-based, guys. Tying yourself to a single, proprietary platform may prove very costly later.

And always, again, ALWAYS get the source code. Do not deal with a developer who will develop your corporate apps but won't give you the source. Period.
You will thank me later for this, and when that happens, you may feel like transferring a huge amount of money to my bank account. Don't worry, that's absolutely normal. In that case, just leave a comment on this post, and you shall find my account details in your email within a few minutes. 😀

OK, gotta code !

Open Source Business Intelligence

To be honest, I never expected to find an open-source Business Intelligence (BI) application, yet here it is: Pentaho. Amazing.

It covers almost the whole spectrum of the Business Intelligence concept: reporting, analysis, data mining, the BI platform, dashboards/management tools, and workflow.

Pentaho, Open Source Business Intelligence

Good stuff. Now I know what to answer when my customers ask me for an open BI solution.
Well done!

SSH Filesystem on Ubuntu

I often need to copy files between Unix / Linux servers with particular cp options, such as -u (copy updated files only). But scp doesn't have this. Or I need to mount a remote filesystem, securely. What to do?

With sshfs / FUSE, we can do this easily.

On Ubuntu, copy-paste the following commands :

sudo aptitude install sshfs
sudo modprobe fuse
sudo sh -c "echo 'fuse' >> /etc/modules"

sshfs / fuse is now installed and will always be loaded automatically.

To mount, type the following command :

sshfs user@hostname:/path/to/folder /local/folder

Now we can access the folder on that remote server via /local/folder, nice!

When you are done, type the following command :

sudo umount /local/folder

Want this to happen automatically on every boot? Just edit /etc/fstab and add a line like this :

sshfs#[user]@[hostname/IP]:/path/to/folder /local/folder fuse defaults 0 0

Hope this is useful.

Ref: ubuntu-tutorials.com

Load testing – the quick & easy way

I've been doing load / capacity testing for years now. In fact, my 2000 Master's (S2) thesis was on this exact topic.

There are many ways to do this.
First, as with any other project, you'll need to define your requirements & objectives. Only then can you start choosing the tools and the methodologies.
This is probably where most people go wrong. Load testing doesn't have to be hard to do, but if you haven't defined the requirements & objectives, chances are you'll be doing it incorrectly.

Second, devise the methodology.
There are 3 main ways to simulate real-life load on an IT infrastructure :

[ 1 ] Pure simulation :
Some software enables you to do this. You define the infrastructure in the simulation software first – the servers, the network links, the capacity of each item, how they interact with each other, and so on.

This is fine, actually quite great, for getting a big picture of your infrastructure.

[ 2 ] Network simulation :
Sometimes (or, rather, often) you'll need to focus on the network. The network is probably the most important aspect of any IT infrastructure. Without a network, each computer is an island of its own, and its usefulness is greatly diminished.

Some software lets you simulate your network and then run various test cases on it. You'll then be able to measure the performance of the network – what's the maximum throughput? The latency? Any dropped packets?
You may even find bottlenecks where you didn't expect any.

[ 3 ] Application-level simulation :
Some software lets you observe the behaviour of an application under simulated load.

An example is the software used to simulate visitors to your webserver.

It's really great because you're actually seeing the real hardware responding to (close to) real requests. The results can sometimes be quite different from the testing discussed in point 1.

There is a lot of software available now for load testing. Some is free, some is easy to use, some is very expensive (to the tune of tens of thousands of dollars), some is very hard to configure, and so on.

But if you have defined your requirements & methodology, it will be quite easy to pick the one suitable for your needs.

Load testing – the quick & easy way

If your needs are simple, for example you're optimizing a webserver and just need a rough idea of how it currently performs, then you can just use OpenWebLoad. I simply haven't found anything simpler.
I know, it hasn't been updated since 2001. But that's probably because it **works** 🙂

Once you've got it installed, testing your webserver can be as simple as openload http://asiablogging.com 10
That command means you'd like to simulate 10 visitors hitting your website as quickly as possible.

That’s it 🙂
Very simple, isn't it? Yet it has helped me many times when I was optimizing our customers' servers.

OK gotta go, hope you find it useful. And, happy new year !

PHP : Form Builder / Generator

My work involves more and more PHP-based forms, so today I decided to find a good form generator to save time.
Here are my requirements :

  1. Willing to pay : I'm willing to pay for the right solution.
  2. Easy to use : some of these scripts actually make life harder for you, go figure. I was looking to save time, not to spend more of it.
  3. Flexible : I still need to apply my own style / formatting. The solution must allow me to do this, while conforming to the second requirement above.
  4. Saving to a database : some PHP form makers / generators only let you submit the form to be sent by email.
  5. Validation : surprisingly, quite a lot of the solutions out there (even commercial ones) are missing this.
  6. Source available : I need the source code available to me, in case of problems or the need for further customization. Some packages don't give you this.

Too picky? Well, my needs are quite advanced indeed.
Anyway, I spent almost two hours browsing around with no joy, until suddenly… to my surprise (again), it turned out that the best solution for my needs is an open-source one – the HTML_QuickForm PEAR package.

It's easy to use (see the tutorial for yourself).

It's definitely very flexible; it provides 8 renderers and supports several template engines! It lets you process the submission however you choose with the process method – by email, to a database, or you can process it straight away in the same script.

And validation… it's really sweet. You can choose whether to do it on the server or the client side. When you choose the client side, it automatically generates the needed Javascript code for you. "Awesome" is not a descriptive enough word for it.
There are many ready-to-use validation rules: alphanumeric, lettersonly, maxlength, minlength, etc., and the regex rule covers any other need not already catered for.

With the source also available, it’s really hard for me to look for anything else. But if you think you’ve found something better, feel free to let me know.

Enjoy.

The Edubuntu Book

After holding two Edubuntu installfests, it turns out there was yet another surprise from Pak Toosa – a book titled "Edubuntu, Pedoman Praktis Linux untuk Pendidikan" (Edubuntu, a Practical Guide to Linux for Education). Well done!

For education practitioners who want to switch to Linux, this book can help you reach that goal. The price is Rp 40,000 (Edubuntu CD included), and it can be ordered from toosa@oo-linux.com. (Outside Jabodetabek, shipping costs are extra.)

Once again, congratulations and salute to Pak Toosa & Pak Rusmanto for this work.

(open source) System Management solutions

When you manage one server, it’s easy to have control of every aspect of it.

But when you’re managing tens or hundreds of them, even Superman will have problems.

System Management and Monitoring (SMM) software can help. By enabling you to monitor (and control) everything from a single application, it will drastically simplify your job.
However, until about a year ago, the following was true of OSS (Open Source Software) SMM :

Pick any two :
1. easy to setup
2. robust
3. easy to use

Fortunately, things have changed now.

In NetworkWorld's article titled Open source companies to watch, there are no fewer than 3 companies providing OSS SMM software. And there is still more OSS SMM software out there.

Things have started to change for the better for OSS system administrators.

Data Recovery in Linux

Willy posted an article discussing how to recover lost data in Linux.

I would just like to add a few more pieces of software that you can try for this purpose:

1. [ Foremost ]
Developed by the United States Air Force Office of Special Investigations and The Center for Information Systems Security Studies and Research, this may prove very beneficial in times of trouble.

2. [ DDrescue ]
This is the tool to use when you have a hard disk that is dying and has loads of bad sectors.
The tool is fully automatic; just run it and it will try its best to recover your data. It's loaded with features, such as automatic merging (when there are several copies of the file, it will merge them to get the most complete version of the file) and a robust recovery system: when run multiple times, it may be able to recover more of your data.
It also has a logfile feature, which enables it to continue from the last point if it was interrupted.
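To give an idea of how these tools are typically run, here is a hedged example; the device name and paths are hypothetical, so adjust them to your situation:

# first pass: copy the readable areas, skip the bad spots (-n), and keep a
# logfile so an interrupted run can resume where it left off
ddrescue -n /dev/hda /mnt/rescue/hda.img /mnt/rescue/hda.log

# second pass: go back and retry the difficult areas up to three times
ddrescue -r3 /dev/hda /mnt/rescue/hda.img /mnt/rescue/hda.log

# then let foremost carve recognisable file types out of the rescued image
foremost -t jpg,doc,pdf -i /mnt/rescue/hda.img -o /mnt/rescue/recovered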

Thanks to Willy for his post.

LowFatLinux.com: a guide for Linux beginners

For those who have just finished installing Linux and are now wondering, "what next?"… the LowFatLinux.com site can help you.

It is written in very simple language that anyone can understand.
I was quite amazed at first by how newbie-oriented this site is, and how its content makes anyone feel comfortable reading it.
Then I noticed the webmaster's name… it turns out to be Bob Rankin, one of the two people behind Tourbus.

Tourbus was one of the first mailing lists (more precisely, newsletters) I ever subscribed to. It is a guide to the various resources / facilities on the Internet. This was back when Google did not exist yet, and the various search engines contained more advertising links than relevant content 🙂
Tourbus, just like LowFatLinux.com, is written in a relaxed, easy-to-digest style that is still packed with information. This list helped me until I was finally able to "drive" by myself on the information superhighway. Even better, the list still exists, and according to its owner it now has 100,000 subscribers. Remarkable.
Only on Tourbus : Warning: squirrels. 🙂

er, I'm rambling… don't forget, go bookmark [ LowFatLinux.com ] right away.

p.s.: one more site like this that is quite complete and good: LinuxCommand.org

LTSP 4.2 @ Ubuntu 6.06 (Dapper Drake)

The following are my notes from installing LTSP 4.2u2 on Ubuntu 6.06 (Dapper Drake).


UPDATE : Installing LTSP is now very easy, especially on Ubuntu Linux.
Just follow this guide : Ubuntu LTSP Quick Install


The LTSP installation tutorial for FC3 (Fedora Core 3) [ can be read here ]

INTRODUCTION

LTSP lets us build a sophisticated, high-performing computer system, even when using very cheap old computers.
With LTSP, we can use even a Pentium I machine (market price around Rp 300,000) as a workstation, with speed equivalent to a Pentium IV!
Given this, LTSP is a very suitable solution for many institutions / companies in Indonesia.

Software raids, although less frequently reported in the mass media these days, are also still going on. The most recent targets were several TV stations in Indonesia.
Rather than paying fines / very expensive license fees, they chose to switch to open-source solutions.
With its very low cost and its ease of use & maintenance, LTSP is one of the solutions worth considering.

Ubuntu 6.06 has many advantages over various other distros, such as :

  • Free of charge, and even delivered to your door via the Ship It service
  • Easy to install and use
  • Ubuntu 6.06 LTS is guaranteed to be kept up to date for 5 years. Compare this with, say, Fedora/Mandrake, which can become obsolete in as little as a year.

REQUIREMENTS

  1. A computer network with a bandwidth of at least 100 Mbps
  2. A server computer with at least the following specification (for 5 workstations) :

    800 MHz processor, 512 MB memory, 20 GB hard disk

  3. Workstation computers :

    200 MHz processor,
    32 MB memory,
    a PCI network card that appears on the list at http://www.rom-o-matic.net,
    a PCI VGA card

  4. Internet access, or an Ubuntu DVD

REFERENCES

  1. LTSP 4.2 installation instructions
  2. Ubuntu 6 Server installation guide
  3. Ubuntu Indonesia & Ubuntu Guide

GUIDE

Here is the guide to installing LTSP 4.2 on Ubuntu 6.

  1. Install the packages required by LTSP :

    sudo aptitude install nfs-user-server dhcpd tftpd portmap libwww-perl inetd

  2. Download ltsp-utils :

    mkdir /tmp/ltsp/
    cd /tmp/ltsp
    links http://ltsp.mirrors.tds.net/pub/ltsp/utils/ltsp-utils-0.25-0.tgz
    tar xzvf ltsp-utils-0.25-0.tgz
    cd ltsp-utils
    sudo ./install.sh
    sudo ./ltspadmin

  3. We have now launched the ltspadmin tool. First, we need to download the LTSP 4.2u2 installation files.

    Select the menu "Install / Update LTSP Packages".

  4. Since this is a first-time installation, ltspadmin will show the following message:


    This is the first time installing LTSP packages, the
    Installation utility must first be configured.

    press <enter> to begin the configuration...

    Press Enter.

  5. A configuration screen will then appear; just keep pressing Enter (unless there is something you need to change) until the prompt "Correct? (y/n/c)" appears – type y, then press Enter.
     
  6. A list of the available LTSP modules will appear. Select all of them (by pressing the space bar).

    Once everything is selected, type "q", and ltspadmin will start downloading and installing all of those modules.

  7. Next, we need to configure LTSP.
    Select the third menu item, "Configure LTSP"
     
  8. Press "C" to select the menu "Configure the services manually".

    A menu numbered 1 through 11 will appear. Run those entries one by one until they are all done. Then press "Q" to return to the main menu, and "Q" once more to exit to the prompt.

  9. As for what needs to be done next :

Enjoy.
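If you need to adjust settings for particular workstations, LTSP 4.x reads them from /opt/ltsp/i386/etc/lts.conf. A minimal sketch; the server address and workstation name are examples only, so adapt them to your own network:

# global defaults for all workstations
[Default]
    SERVER           = 192.168.0.254
    XSERVER          = auto
    X_MOUSE_PROTOCOL = "PS/2"
    X_MOUSE_DEVICE   = "/dev/psaux"
    SCREEN_01        = startx

# per-workstation override, e.g. force a generic VGA driver
[ws001]
    XSERVER          = vesa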

TIPS

  • If the server is connected to the Internet, it should be protected with a firewall. The one I can recommend is Firehol, because it is easy to use and maintenance-free.
    An example configuration can be seen in this post.
  • If you run into problems, please have a look at this post. Solutions to several common problems are discussed there.

DOCUMENT VERSION

v1.0 – first release
v1.01 – minor correction re: ubuntu version numbering (thanks Andy)

Backing up in Linux

I've been managing many Windows and Unix servers over the last 10 years, and this I know for sure: backing up in Windows can be a painful experience, and most of the time it requires significant investment in special backup software (which tends to cost thousands of dollars, often more).
Even then, the software becomes buggy once you start setting up complex backup scenarios.

In Unix/Linux, however, you have powerful scripting tools at your disposal, usually already included. These tools are very flexible, enabling you to develop almost any kind of backup scheme.
It does require some programming skill. Point-and-click admins will have a hard time at first, but let me tell you, do give it a try. You'll find that it's very much worth the trouble.

Both require an investment of time & effort to develop a good backup strategy.

Before we go further, here are a few rules regarding backups :

  • You can never have too many backups.
    I backed up my personal data to several locations – my other PC, and also my brother's PC. So in total, I had 3 copies of it.
    One day, the hard drive in my main PC broke down. So I went to my brother and asked him to copy back my data from his PC. To my surprise, he said his hard drive had just died too. I ended up with only a single copy of my precious data.
    I quickly replaced the dead hard drive, restored the only copy of my data there, and wrote a new backup script for it. Nowadays, my data is usually available in 5 or more locations.
  • Automate all of its processes.
    If it requires even the tiniest amount of manual intervention, believe me, it will end up not being executed. Once or twice, you may still be willing to intervene. But when you need to do that every day, it just won't happen. (A sample cron entry appears after this list.)
  • Check your backups.
    Check the results / logs every day. Try restoring the backup about every week. Do NOT skip this, or you will find out that the backup is actually not restorable right when that very important server dies on you.

These are the most important ones, and I confess to having suffered from one or more of them in the past.
You don't have to; it's your choice.
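To satisfy the automation rule, the kind of scripts shown below can simply be run from cron. A sketch of the crontab entries, with hypothetical script names and paths:

# run the backups unattended every night, and keep a log so the
# "check your backups" rule is easy to follow
30 1 * * * /usr/local/bin/backup-disk.sh >> /var/log/backup-disk.log 2>&1
45 2 * * * /usr/local/bin/backup-members.sh >> /var/log/backup-members.log 2>&1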

Anyway, here are a few sample scripts to get you started backing up in Linux.

Backing up the whole hard drive, over the network.

#!/bin/bash
mkdir /mnt/backup
mount -t ext3 /dev/hdb1 /mnt/backup
chmod 777 /mnt/backup
cd /mnt/backup
/usr/bin/rsync -avuz --progress --rsh="ssh -l root -i /root/.ssh/id_dsa" 192.168.0.1:/ /mnt/backup

The 1st line is important – it tells the computer that we'd like this script to be processed by bash. Different shells have different syntax, so we need to be precise about this.

The 2nd line creates the directory for mounting the backup drive. The 3rd line mounts /dev/hdb1 (the first partition of the second IDE device) to /mnt/backup. The 4th line gives full access to the drive. The 6th line does the actual backup, copying only changed files from 192.168.0.1 to /mnt/backup.

It may seem simple at first, but make no mistake, rsync is one powerful tool. For example, quoted from the rsync manual:

The rsync remote-update protocol allows rsync to transfer just the differences between two sets of files across the network connection, using an efficient checksum-search algorithm described in the technical report that accompanies this package.

This capability has enabled me to back up a 200 GB hard drive, over a 100 Mbps network, in under 2 hours.
Without disturbing the 15+ users who are on that network as well. Simply amazing.

The next one is probably the kind of backup script you'll encounter more often – back up, compress, store in a safe location.


#!/bin/bash
tar cvf /backup/accounting-$(date +%Y%m%d).tar /home/accounting
bzip2 -9 /backup/accounting-$(date +%Y%m%d).tar
/usr/bin/scp -2 -i ~/.ssh/id_dsa /backup/accounting-$(date +%Y%m%d).tar.bz2 smith@192.168.0.10:/data/backup/

The 2nd line bundles up the whole content of /home/accounting into a single file named /backup/accounting-(today's date).tar. For example, /backup/accounting-20061230.tar would be the resulting file if this script were run on 30 December 2006.
This trick is needed so the backup doesn't overwrite the same file every time it runs. This way, we'll have multiple backups over time, instead of just one.

The 3rd line compresses the file above, as strongly as possible (the -9 switch).
The last line copies the file (now with a .bz2 extension after compression by bzip2) into the directory /data/backup/ on a server with the IP address 192.168.0.10, as the user smith.

The last example is a more complicated backup script.
I developed it to back up groups.or.id's (kind of like Yahoogroups) member database automatically, every day, to servers in different countries. That way, in case of disaster, the administrator can quickly restore the service on another server with little trouble.

Backup member database, over the Internet.


#!/bin/bash
daftar_milis=( $(ls ~/) )

for element in $(seq 0 $((${#daftar_milis[@]} - 1)))
do

echo "---- MILIS: ${daftar_milis[$element]} ----" >> /backup/daftar-member-$(date +%Y%m%d).txt
/usr/bin/ezmlm-list ~/${daftar_milis[$element]} >> /backup/daftar-member-$(date +%Y%m%d).txt

done

/usr/bin/bzip2 -9 /backup/daftar-member-$(date +%Y%m%d).txt
/usr/bin/scp -2 -i ~/.ssh/id_dsa /backup/daftar-member-$(date +%Y%m%d).txt.bz2 harry@mydomain.com:/home/harry/backup/groups.or.id/

exit 0

A bit of background: the server uses ezmlm as the mailing-list software, which is usually controlled by the user "alias".

The 2nd line is already interesting. Basically, we execute ls (which lists the contents of a directory) on ~/. What is the directory ~/? The tilde character (~) is a shortcut for our home directory. So, when running this script as the alias user, "ls ~/" actually means "ls /var/qmail/alias/".
The result (a list of files and directories) is then stored in an (array) variable named "daftar_milis".

The 4th and 5th lines set up a loop. It will loop as many times as there are entries in "daftar_milis".

The 7th line outputs a line, "---- MILIS: (current entry in "daftar_milis") ----", and appends it (>>) to a file named /backup/daftar-member-(today's date).txt.

The 8th line runs ezmlm-list, which lists the members of the mailing list, and stores it in the same file as above.

When all the entries in "daftar_milis" have been processed, the 12th line is executed. It compresses the backup file with bzip2 compression.
Note that this compression algorithm is much more complex than standard Zip compression, so on a slow processor it may take a very long time to finish.

The 13th line copies the backup file to a server somewhere on the Internet, over a secure tunnel encrypted with the SSH2 protocol.

So there you are – a few examples to get you started backing up in Linux. Hope you find them useful.

Risk of buying Microsoft software

You can be forced to go through a software license audit process, which will be done by Microsoft's Sales department.

Note the emphasized words above – the result of the audit may very well force you to buy more software licenses, which you may not need. This is because Microsoft thinks that using its own auditing software, instead of a neutral third party's, is the unbiased way to do it.

"Wait a minute, how can Microsoft audit me at any time they feel like it, using their own tool?"
Because you've agreed to let them do so.

And not only that: the EULA also states "Microsoft reserves all rights not expressly granted to you in this EULA".

Good thing Auto Warehousing Co. can afford good (and, I assume, expensive) lawyers, so they managed to avoid the audit. Others are not so lucky.

Note that other (proprietary) software vendors also tend to have similar EULAs.

If you think this will not happen to you, think again.

Auto Warehousing Co. is just a victim who happened to be brave enough to speak out. In Indonesia, there have even been raids on big companies – we're talking about insurance companies, banks, etc. here. But they'd rather settle quietly than acknowledge that they have licensing compliance issues.

Conclusion: if you're a big company and able to hire good lawyers, you'll be safer from this. And chances are, you can afford the settlement cost if it turns out you have licensing issues.
But if you're not, try to avoid purchasing software with a crazy EULA like this. You might just regret it later.

Congratulations to Mitra Netra

It turns out there were 2 finalists from Indonesia in the Stockholm Challenge 2006; besides e-Kebumen.net, the other is Mitra Netra!

The title of their project is "Innovation and the usage of IT Project for the improvement of Blind Community Life Quality in Education and Human Resources sector and the Public Awareness Development on the Disables Rights in Indonesia".

In short, Mitra Netra is a foundation that helps blind people continue to use PCs and enjoy other things just like everyone else.
Interestingly, I once heard from Pak Rusmanto that Mitra Netra uses Linux on the computers used by its students. Wow.

Just like the e-Kebumen team, as far as I know the Mitra Netra team has also been invited to attend the award ceremony in Sweden.
Well done, and may things keep getting better!

A Career with Linux

A few days ago I happened to meet Pak Rusmanto and fade2blac. We talked about various topics. At one point, Pak Rus mentioned the Linux course run by Nurul Fikri.

It turns out that :

  • Graduates of its Linux classes are snapped up quickly.
  • You can't "order" one at short notice – if you want to hire one of the course participants, you have to wait. It's like ordering a Honda Jazz 🙂 you go on the waiting list first, he he.
  • While other high-school graduates are stuck with salaries at or below the regional minimum wage (UMR), graduates of Nurul Fikri's Linux course can expect a salary of around Rp 2,000,000 as fresh graduates.

So for those about to finish high school who want a career in IT, this is a very good opportunity, because there is still very little competition.

And it is an opportunity for investors too – you could open a Nurul Fikri branch in your town and watch your training centre's graduates get booked even before they graduate. Naturally, this will make your training centre very attractive to prospective students.

If you are interested, please contact Pak Rus directly.

Open Source CRM

Today I evaluated various open-source CRMs for one of my clients. To my surprise, there were quite a few discoveries:

1. Compiere can now use the PostgreSQL database. (Previously it could only use Oracle.)

2. There is a lot of open-source CRM and ERP software, and its quality is far better than the last time I surveyed this field.

3. The best (YMMV) are … vTiger CRM and SugarCRM.
One advantage of vTiger CRM is that it provides a customer-facing module, so customer service can be moved almost 100% to the Internet.

Quite impressive: a CRM of equivalent quality could cost thousands of dollars, yet we can get this for free.

An additional note:
my client is interested in mobile SFA (Sales Force Automation) using PDAs. After checking, only SugarCRM Professional can produce PDA-specific output, so that access is faster. However, SugarCRM Professional costs thousands of dollars.
As an alternative, there is the solution from Software on Sail boats; they seem to get a lot of recommendations for their quality, and the price is very affordable.

Now there is no longer any excuse for giving bad customer service, because there are plenty of good, free CRMs that can help you improve the quality of your customer care.