
HowTo use "diff" and "patch"

Suppose you have written a program, prog.c, and kept a copy of the original as prog.c.old.
You distribute prog.c to users. Later, you make changes to prog.c and want to release a patch:

$ diff -c prog.c.old prog.c > prog.patch

Now, users can apply the update by running:

$ patch < prog.patch
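If the patch does not apply automatically to the right file, you can name the file explicitly, and patch's -R option reverses an already applied patch (shown here as a sketch; adjust the file names to your project):

$ patch prog.c < prog.patch
$ patch -R prog.c < prog.patch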

How To disable virtual consoles (Alt+F1)

Linux provides several virtual consoles, which you can reach with the key combinations Alt+F1 (tty1) through Alt+F6 (tty6) and log in to directly.

There is a quick and effective way to disable these consoles.

1) Open the /etc/inittab file and look for this section:

1:2345:respawn:/sbin/mingetty --noclear tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6

As you can see above, there are six lines (one for each virtual console). Simply placing a "#" at the beginning of a line disables that particular console. Suppose I need to disable consoles 4, 5 and 6; in that case my inittab file will look like this:

1:2345:respawn:/sbin/mingetty --noclear tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
#4:2345:respawn:/sbin/mingetty tty4
#5:2345:respawn:/sbin/mingetty tty5
#6:2345:respawn:/sbin/mingetty tty6
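init does not pick up changes to /etc/inittab on its own; a simple extra step (assuming a SysV init system, as above) is to ask it to re-read the file. Any getty already running on a disabled console will go away once it is killed or on the next reboot:

# init q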

How to use smartmontools to monitor hard disk health?

Monitoring your hard disk's health is very important. You do not want to wake up one day, turn on your computer and find that your hard disk has crashed and all your valuable data has gone with the wind. At that point, crying will not get your data back. As people say, prevention is better than cure. Apart from backing up your data regularly, monitoring the health of your hard disk is an essential task: it ensures that symptoms of bad sectors or other failures are detected early, so steps can be taken to deal with them sooner. One of the tools that can do this job is smartmontools. According to the yum description, smartmontools are "Tools for monitoring SMART capable hard disks".

To install smartmontools on fedora:
# yum install smartmontools

Make sure your hard disk is smart capable
# smartctl -i /dev/sda
smartctl version 5.37 [i386-redhat-linux-gnu] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Caviar SE (Serial ATA) family
Device Model: WDC WD800JD-60LSA5
Serial Number: WD-WMAM9MA75547
Firmware Version: 10.01E03
User Capacity: 80,026,361,856 bytes
Device is: In smartctl database [for details use: -P show]
ATA Version is: 7
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Tue Jul 22 10:05:31 2008 MYT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

SMART support is available and enabled for this hard disk.

To check your hard disk's health:
# smartctl -H /dev/sda
smartctl version 5.37 [i386-redhat-linux-gnu] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

To run a short self-test on your hard disk:
# smartctl -t short /dev/sda

To see the selftest logs of smartctl
# smartctl -l selftest /dev/sda

See all options for smartctl
# smartctl -h

Manual for smartctl
# man smartctl
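Beyond one-off checks, smartmontools also ships the smartd daemon, which can watch disks continuously. As a rough sketch (the schedule and mail address below are assumptions, adjust to taste), an /etc/smartd.conf entry like the following monitors /dev/sda, runs a short self-test every day at 2am and mails root if problems are found:

/dev/sda -a -s (S/../.././02) -m root

# service smartd start
# chkconfig smartd on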

How To Create OpenSuse 11.0 Boot CD?

If problems occur booting your system using a boot manager or if the boot manager cannot be installed on the MBR of your hard disk or a floppy disk, it is also possible to create a bootable CD with all the necessary start-up files for Linux. This requires a CD writer installed in your system.

Creating Boot CDs
Change into a directory in which to create the ISO image, for example: cd /tmp

Create a subdirectory for GRUB:

mkdir -p iso/boot/grub


Copy the kernel, the files stage2_eltorito, initrd, menu.lst, and message to iso/boot/:

cp /boot/vmlinuz iso/boot/
cp /boot/initrd iso/boot/
cp /boot/message iso/boot/
cp /usr/lib/grub/stage2_eltorito iso/boot/grub
cp /boot/grub/menu.lst iso/boot/grub

Adjust the path entries in iso/boot/grub/menu.lst to make them point to a CD-ROM device. Do this by replacing the device name of the hard disks, listed in the format (hdx,y), in the pathnames with the device name of the CD-ROM drive, which is (cd). You may also need to adjust the paths to the kernel and the initrd—they need to point to /boot/vmlinuz and /boot/initrd, respectively. After having made the adjustments, menu.lst should look similar to the following example:

timeout 8
default 0
gfxmenu (cd)/boot/message

title Linux
   root (cd)
   kernel /boot/vmlinuz root=/dev/sda5 vga=794 resume=/dev/sda1 \
   splash=verbose showopts
   initrd /boot/initrd
Use splash=silent instead of splash=verbose to prevent the boot messages from appearing during the boot procedure.

Create the ISO image with the following command:

mkisofs -R -b boot/grub/stage2_eltorito -no-emul-boot \
-boot-load-size 4 -boot-info-table -o grub.iso /tmp/iso

Write the resulting file grub.iso to a CD using your preferred utility. Do not burn the ISO image as data file, but use the option for burning a CD image in your burning utility.
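From the command line you could use cdrecord, for example (the device name below is an assumption; check your burner with cdrecord -scanbus, or use wodim on newer systems):

cdrecord -v dev=/dev/sr0 grub.iso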

How To Install Adobe Flash Player 10 into Firefox 3 under Fedora 9

The Flash Player has been improved to support new and cool internet browsing features such as 3D effects, advanced 3D layouts and more. You can read about the new Adobe Flash Player 10 features on Adobe's site.

Adobe Flash Player 10 Installation into Firefox 3 under Fedora 9
To install flash 10 with your Firefox browser under Fedora 9 OS, simply follow the following steps.

Step One
As root, uninstall any existing Adobe Flash Player rpm package (named flash-plugin), like so:
# rpm -qa | grep flash-plugin | xargs rpm -e
Uninstall Fedora's gnash and swfdec rpm packages like so:
# rpm -qa | grep swfdec | xargs rpm -e
# rpm -qa | grep gnash | xargs rpm -e

Step Two
Uninstalling Fedora's mozplugger rpm is optional rather than required; note that removing mozplugger will disable the other browser plugins that Firefox handles through it. If you do wish to uninstall the mozplugger rpm package, simply:
# rpm -qa | grep mozplugger | xargs rpm -e

Step Three
For i386 machines, download and install every rpm that you can find from this link
To install, simply
# rpm -ivh curl-7.18.2-3.fc9.i386.rpm
# rpm -ivh curl-debuginfo-7.18.2-3.fc9.i386.rpm
# rpm -ivh libcurl-7.18.2-3.fc9.i386.rpm
# rpm -ivh libcurl-devel-7.18.2-3.fc9.i386.rpm

If you have any existing rpm package simply use rpm upgrade like so
# rpm -Uvh curl-7.18.2-3.fc9.i386.rpm
# rpm -Uvh curl-debuginfo-7.18.2-3.fc9.i386.rpm
# rpm -Uvh libcurl-7.18.2-3.fc9.i386.rpm
# rpm -Uvh libcurl-devel-7.18.2-3.fc9.i386.rpm

If your platform is not i386, locate your machine's architecture, download the rpms under the matching platform folder, and install them into your F9 system using the same rpm commands shown above.
If you have an earlier Fedora version, you can get the libcurl rpm matching that older release as well.

Step Four
Now, get back to your terminal window. Ensure that the curl library can be found in the library search path:
# updatedb ; locate libcurl.so.3
/usr/lib/libcurl.so.3

Ensure also that Firefox binary can be found from default path as well

# which firefox
/usr/bin/firefox


Step Five
Download Adobe Flash Player rpm plugin and install it via rpm

# wget -c http://download.macromedia.com/pub/…081108.i386.rpm


# rpm -ivh flashplayer10_install_linux_081108.i386.rpm

The above Adobe Flash rpm is compiled for Fedora 9 i386 platforms; no rpms for other Linux platforms are available as of this posting.

Step Six
Go to /usr/lib/mozilla/plugins/ folder
# cd /usr/lib/mozilla/plugins/
# ls -la /usr/lib/mozilla/plugins/
and make sure you have the following symlink:
lrwxrwxrwx 1 root root 39 2008-08-20 08:40 libflashplayer.so -> /usr/lib/flash-plugin/libflashplayer.so
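If the symlink is missing, you can create it by hand (a small sketch, assuming the plugin was installed to /usr/lib/flash-plugin as above):

# ln -s /usr/lib/flash-plugin/libflashplayer.so /usr/lib/mozilla/plugins/libflashplayer.so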

Step Seven
Restart your Firefox browser and head over to www.youtube.com to test your fresh installation of Adobe Flash Player 10.
Congratulations, you've just installed Adobe Flash Player 10!

Adobe Flash Player 10 Installation Verification
Launch the Firefox browser and enter about:plugins in the address bar. Check for the Shockwave Flash (Adobe Flash Player 10) plugin entry.

Troubleshooting
Note that the nspluginwrapper rpm is not required for a successful installation on Fedora 9. If the installation does not work and you have an earlier version of Fedora, try installing the nspluginwrapper rpm using yum, like so:
# yum -y install nspluginwrapper

[Via: http://www.ilovetux.com/]

Shadow password file /etc/shadow explained

The problem with traditional passwd files is that they had to be world readable so that programs could extract information about a user, such as the user's full name. This means that everyone can see the encrypted password in the second field. Anyone can copy another user's password field and then try billions of different passwords to see if they match.

The shadow password file is used only for authentication and is not world readable -- there is no information in the shadow password file that a common program will ever need, and no regular user has permission to see the encrypted password field. The fields are colon separated, just like in the passwd file.

Here is an example line from a /etc/shadow file:

nik:Q,Jpl.or6u2e7:10795:0:99999:7:-1:-1:134537220

nik - The user's login name.

Q,Jpl.or6u2e7 - The user's encrypted password known as the hash of the password.

10795 - Days since January 1, 1970 that the password was last changed.

0 - Days before which password may not be changed. Usually zero. This field is not often used.

99999 - Days after which password must be changed. This is also rarely used, and will be set to 99999 by default.

7 - Days before password is to expire that user is warned of pending password expiration.

-1 - Days after password expires that account is considered inactive and disabled. -1 is used to indicate infinity -- i.e. to mean we are effectively not using this feature.

-1 - Days since January 1, 1970 when account will be disabled.

134537220 - Flag reserved for future use.
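Rather than reading /etc/shadow directly, you can view most of these fields in a readable form with the chage utility (using the example account above):

# chage -l nik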

Why is GNU software better than proprietary software?

Proprietary software is often looked down upon in the free software world for many reasons:

    * It is closed to external scrutiny.
    * Users are unable to add features to the software
    * Users are unable to correct errors (bugs) in the software

The result of this is that proprietary software,

    * does not conform to good standards for information technology.
    * is incompatible with other proprietary software.
    * is buggy.
    * cannot be fixed.
    * costs far more than it is worth.
    * can do anything behind your back without you knowing.
    * is insecure.
    * tries to be better than other proprietary software without meeting real technical needs.
    * wastes a lot of time duplicating the effort of other proprietary software.
    * often does not build on existing software because of licensing issues or ignorance

GNU software, on the other hand, is open for anyone to scrutinize. Users can (and do) freely fix and enhance software for their own needs, then allow others the benefit of their extensions. Many developers of different expertise collaborate to find the best way of doing things. Open industry and academic standards are adhered to, making software consistent and compatible. Collaborative effort between different developers means that code is shared and effort is not replicated. Users have close and direct contact with developers, ensuring that bugs are fixed quickly and users' needs are met. Because the source code can be viewed by anyone, developers write code more carefully and are more inspired and more meticulous.

Another partial reason for this superiority is that GNU software is often written by people from academic institutions who are at the centre of IT research and are well qualified to shape software solutions. In other cases, authors write software for their own use out of dissatisfaction with existing proprietary software -- a powerful motivation.

Backup and Recovery solution for MySQL with Zmanda

 Backup Capabilities of Zmanda
    * ZRM for MySQL can backup multiple MySQL databases that are managed by the MySQL server.
    * It can backup multiple databases hosted on multiple MySQL servers.
    * It can also backup tables in a single database.
    * It can perform hot backups of the databases.
    * It supports multiple backup methods depending on the storage engine used by MySQL tables.
    * It has two levels of backups : full and incremental database backups.
    * It can use mysqldump, mysqlhotcopy, snapshots(Linux LVM/Solaris ZFS) and MySQL replication as various backup methods.
    * It creates consistent backups of the database irrespective of the storage engines used by the database's tables.
    * It supports SSL authentication between the local ZRM for MySQL and remote MySQL server to allow secure backups over Internet or across firewalls.
    * It can verify backed up data images.
    * Backup images can be compressed as well as encrypted using standard tool such as gzip, GPG, etc .
    * System administrator can abort backup jobs.

Recovery Capabilities
    * ZRM for MySQL makes it easy to recover backed up data.
    * It supports the use of a backup index that stores information about each backup run.
    * It has a reporting tool that can be used for browsing the index.
    * It can recover full and incremental database backups.
    * It does selective incremental restores based on binary log position or at a point in time . This permits recovery from database operator errors.
          o Such a point could be a point in time or a point in the binary logs of the Database.
          o ZRM for MySQL provides an easy way to filter in / filter out database events from the binary logs.
          o This helps in deciding what to restore and what to keep out.
    * Depending on the type of backups you have been doing, the backed up data could be recovered on to the same machine or to an entirely different machine.

If somebody accidentally drops a critical table in MySQL, the application no longer works. The solution to this problem is to utilize the (open source) Zmanda Recovery Manager.

You are a MySQL database administrator. You take regular backups of your MySQL database. Somebody drops a table critical to the MySQL application (for example, the "accounts" table in a SugarCRM application). The MySQL application no longer works. How can you recover from the situation?

The answer is MySQL binary logs. Binary logs track all updates to the database with minimal impact on database performance. MySQL binary logs have to be enabled on the server. You can use the mysqlbinlog MySQL command to recover from the binary logs.
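As a rough illustration of the manual approach (the position is taken from the example further below; your log file names and positions will differ), mysqlbinlog can replay everything up to just before the bad statement:

# mysqlbinlog --stop-position=11159 /var/lib/mysql/my-bin.000015 | mysql -u root -p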

A more comprehensive solution is to use the Zmanda Recovery Manager for MySQL. The mysql-zrm tool allows users to browse the binary logs and selectively restore the database from incremental backups:

      # mysql-zrm --action parse-binlogs   --source-directory=/var/lib/mysql
      /sugarcrm/20060915101613
      Log filename                 | Log Position | Timestamp         | Event Type |    Event
      /var/lib/mysql/my-bin.000015 | 11013        | 06-09-12 06:20:03 | Xid = 4413 | COMMIT;
      /var/lib/mysql/my-bin.000015 | 11159        | 06-09-12 06:20:03 | Query      | DROP TABLE IF EXISTS `accounts`;

Here we do a selective recovery from the incremental backup, skipping the DROP of the critical "accounts" table of the SugarCRM database. Run two selective restore commands to restore from the incremental backup done on Sept 15, 2006, without executing the database event DROP TABLE at log position 11159:

      # mysql-zrm --action restore  --backup-set sugarcrm \
         --source-directory=/var/lib/mysql/ sugarcrm/20060915101613/ \
         --stop-position 11014
      # mysql-zrm --action restore   --backup-set sugarcrm \
         --source-directory=/var/lib/mysql/ sugarcrm/20060915101613/ \
         --start-position 11160

See the Zmanda Recovery Manager for MySQL for more information: http://mysqlbackup.zmanda.com.

Setup and Configure openSUSE 11.0

Let me start off by saying that openSUSE 11.0 is the best Linux distribution I have ever used.  There are some rough edges surrounding KDE 4, but the package management in openSUSE 11.0 makes huge strides over that offered in previous versions.  If you want to get up and running with openSUSE 11.0 then there are likely a few customizations you’ll want to make.

Setup Multimedia
This is a perennial setup step on Linux distributions.  We'll install the codecs needed to watch DVDs, handle MP3s, etc.  We'll also set up Firefox to be able to handle Windows media streams.
  1. YaST > “Software” > “Software Repositories”
  2. Click “Add”
  3. Select “Community Repositories”
  4. Select “Packman Repository” and “VideoLan Repository”
  5. YaST > “Software” > “Software Management”
  6. Uninstall xine-lib and install libxine1, w32codec-all, libdvdcss, k3b-codecs, and mplayerplug-in
  7. Open Firefox and type “about:config” into the address bar
  8. Right Click > “New” > “String”
  9. Enter “network.protocol-handler.app.mms”
  10. Enter “/opt/kde3/bin/kaffeine” (output of “which kaffeine” at command line)
Install NVIDIA drivers
If you have an NVIDIA card, then you’ll want to install the drivers.
  1. YaST > “Software” > “Software Repositories”
  2. Click “Add”
  3. Select “Community Repositories”
  4. Select “NVIDIA Repository”
  5. YaST > “Software” > “Software Management”
  6. Install “nvidia-gfxGO1-kmp-default”
Install CD ripper and ID3 tagger
For some reason, openSUSE 11.0 no longer ships with KAudioCreator or an ID3 tagger installed by default.  My guess would be that they haven’t been ported to KDE4 yet, but they’re nice to have, so we’ll go ahead and install them anyway.  We’ll also change KAudioCreator’s (stupid) default setting of not looking up CDDB information that hasn’t been cached on the local system.
  1. YaST > “Software” > “Software Repositories”
  2. Click “Add”
  3. Select “Community Repositories”
  4. Select “openSUSE BuildService - KDE:Community”
  5. YaST > “Software” > “Software Management”
  6. Install “kid3″ and “kdemultimedia3-CD”
  7. Open kaudiocreator
  8. Select “Settings” > “Configure KAudioCreator …” > “CDDB”
  9. Set lookup to “Cache and remote”
Upgrade WINE
WINE is continuing to evolve and getting closer every day to reaching maturity.  You’ll likely want the latest version instead of the one that was the latest when openSUSE shipped.
  1. YaST > “Software” > “Software Repositories”
  2. Click “Add”
  3. Select “Community Repositories”
  4. Select “openSUSE BuildService - Wine CVS Builds”
  5. YaST > “Software” > “Software Management”
  6. Do a search for wine and click the check mark until version upgrade is selected
Setup a static IP address
Having a static IP address is very nice when you want to remote desktop to your server or access it in some other way without worrying about what the IP address is.  There may also need to be some configuration done on your router for this one.  Or you may prefer to investigate DHCP reservations if your router supports them.
  1. YaST > “Network Devices” > “Network Settings”
  2. Under “Overview”, select your network card and click “Edit”
  3. Enter your static IP and save it
  4. Under “Hostname/DNS”, enter your DNS servers and hit “Finish”
Setup remote desktop through NX
The two main remote desktop technologies for Linux are VNC and NX.  NX is much faster, and KDE's VNC server, KRfb, is broken in openSUSE 11.0.  An NX server ships with openSUSE 11.0, but we want to install at least version 3.0 in order to do desktop sharing.  We'll also open the SSH port in the firewall (NX is built on top of SSH) so that we can connect from another machine.
  1. Download the NX Linux packages
  2. Run “rpm -iv nxclient-3.1.0-2.i386.rpm”, “rpm -iv nxnode-3.1.0-3.i386.rpm”, and “rpm -iv nxserver-3.1.0-2.i386.rpm”
  3. Run “/usr/NX/scripts/setup/nxserver --install”
  4. Run “/usr/NX/bin/nxserver --keygen”
  5. In your NX client, open “Configure…” > “General” tab > “Key …”
  6. Copy the contents of “/usr/NX/share/keys/default.id_dsa.key” into the key window and save it
  7. Open “/usr/NX/etc/server.cfg”
  8. Change line 563 from 'EnableSessionShadowingAuthorization = "1"' to 'EnableSessionShadowingAuthorization = "0"'. This enables you to select "Shadow" in the client under the "General" tab's "Desktop" framebox if you'd like to do desktop sharing
  9. YaST > “Security and Users” > “Firewall” > “Allowed Services”
  10. Allow “Secure Shell Server”
Setup Network File Share using Samba
Samba makes it easy to share files with Windows machines and other computers on your network.
  1. YaST > “Software” > “Software Management”
  2. Install “samba” if it is not already installed
  3. YaST > “Network Services” > “Samba Server”
  4. Change sharing settings as you’d like and hit “Finish”
  5. Add a user to Samba by running “smbpasswd -a username” where username is the user you’d like to create.
  6. YaST > “Security and Users” > “Firewall” > “Allowed Services”
  7. Allow “Samba Server”


Ubuntu Wins !!!

After running the poll for almost 40 days and getting around 2600 votes, here are the results

 
Ubuntu Wins !!!

OpenVAS - Open Vulnerability Assessment System

OpenVAS stands for Open Vulnerability Assessment System and is a network security scanner with associated tools like a graphical user front-end. The core component is a server with a set of network vulnerability tests (NVTs) to detect security problems in remote systems and applications.

About OpenVAS Server

The OpenVAS Server is the core application of the OpenVAS project. It is a scanner that runs many network vulnerability tests against many target hosts and delivers the results. It uses a communication protocol to have client tools (graphical end-user or batched) connect to it, configure and execute a scan and finally receive the results for reporting. Tests are implemented in the form of plugins which need to be updated to cover recently identified security issues.

The server consists of 4 modules: openvas-libraries, openvas-libnasl, openvas-server and openvas-plugins. All need to be installed for a fully functional server.

OpenVAS server is a forked development of Nessus 2.2. The fork happened because the major development (Nessus 3) changed to a proprietary license model and the development of Nessus 2.2.x is practically closed for third party contributors. OpenVAS continues as Free Software under the GNU General Public License with a transparent and open development style.

About OpenVAS-Client


OpenVAS-Client is a terminal and GUI client application for both OpenVAS and Nessus. It implements the Nessus Transfer Protocol (NTP). The GUI is implemented using GTK+ 2.4 and allows for managing network vulnerability scan sessions. OpenVAS-Client is a successor of NessusClient 1.X.

 
The fork happened with NessusClient CVS HEAD 20070704. The reason was that the original authors of NessusClient decided to stop active development for this (GTK-based) NessusClient in favor of a newly written QT-based version released as proprietary software.

OpenVAS-Client is released under GNU GPLv2 and may be linked with OpenSSL.

You can download OpenVAS here:
OpenVAS Client
OpenVAS Server

Grendel - Web Application Security Testing Tool

Grendel-Scan is an open-source web application security testing tool. It has an automated testing module for detecting common web application vulnerabilities, and features geared toward aiding manual penetration tests. The only system requirement is Java 5; Windows, Linux and Macintosh builds are available.

Current stable version: 1.0

Windows: Grendel-Scan-v1.0-win32.zip
Linux: Grendel-Scan-v1.0-linux.zip
Macintosh: Grendel-Scan-v1.0-mac.zip
Source: Grendel-Scan-v1.0-src.zip
JavaDocs: Grendel-Scan-v1.0-javadoc.zip

Backup and Restore PostgreSQL DB

SQL Dump
The idea behind the SQL-dump method is to generate a text file with SQL commands that, when fed back to the server, will recreate the database in the same state as it was at the time of the dump. PostgreSQL provides the utility program pg_dump for this purpose. The basic usage of this command is:

pg_dump dbname > outfile

As you see, pg_dump writes its results to the standard output

Restoring the dump
 The text files created by pg_dump are intended to be read in by the psql program. The general command form to restore a dump is

psql dbname < infile

where infile is what you used as outfile for the pg_dump command. The database dbname will not be created by this command; you must create it yourself (for example with createdb) before restoring.
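For larger databases you can compress the dump on the fly and feed it back through gunzip when restoring (a standard pg_dump/psql pipeline, shown here as a sketch):

pg_dump dbname | gzip > outfile.gz
gunzip -c outfile.gz | psql dbname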

How To Perform load tests and benchmarking website on Apache

If you need to simulate a load on an Apache server (or any web server actually), you can use Apache Bench, which is included in the standard Apache HTTPd distribution. This tool will launch connections to your webserver as instructed to simulate multiple users and will help you to tune your Apache settings.

You can find the synopsis at the Apache website. The most common options are:

    * -n : number of requests to perform
    * -c : number of concurrent requests

Other options allow you to control precisely the request to send, proxy settings, user authentication, cookies and much more.
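A minimal invocation looks like this (the URL and numbers are just placeholders; note that ab needs a full URL including the trailing path):

$ ab -n 1000 -c 10 http://www.example.com/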

How To Disable ipv6 on SuSE Linux

For some strange reason, ipv6 is switched ON by default in SuSE Linux.
To check whether you are currently running ipv6, run the following command as root:

# ifconfig

eth0      Link encap:Ethernet  HWaddr 00:0F:1F:89:8F:D5
          inet addr:192.168.1.100  Bcast:140.171.243.255  Mask:255.255.254.0
          inet6 addr: fe80::20f:1fff:fe89:8fd5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:33386388 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2947979 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2211978470 (2109.5 Mb)  TX bytes:380978644 (363.3 Mb)
          Base address:0xdf40 Memory:feae0000-feb00000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:895 errors:0 dropped:0 overruns:0 frame:0
          TX packets:895 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:76527 (74.7 Kb)  TX bytes:76527 (74.7 Kb)

If you have lines containing inet6 as above, then your machine IS running ipv6.

Disabling ipv6 on SuSE Linux

To disable ipv6 completely, run the following commands as root:
# echo "alias net-pf-10 off" >> /etc/modprobe.conf.local
# echo "alias ipv6 off" >> /etc/modprobe.conf.local
Restart the machine.

Once your machine has rebooted, rerun the ifconfig command and verify that the inet6 lines have been removed. Your machine is now running without IPv6.
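A couple of quick checks you can run after the reboot (just a sketch; both should produce no output if IPv6 is really off):

# ifconfig | grep inet6
# lsmod | grep ipv6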




How to DENY SSH access for certain user on Linux

For security reasons, you may need to block SSH access to a Linux box for certain users.

Edit the sshd_config file; its location sometimes differs depending on the Linux distribution, but it's usually in /etc/ssh/.

Open the file up while logged on as root:

vi /etc/ssh/sshd_config

Insert a line:

DenyUsers username1 username2 username3 username4

Referring to #man sshd_config:

    DenyUsers
    This keyword can be followed by a list of user name patterns,
    separated by spaces. Login is disallowed for user names that
    match one of the patterns. '*' and '?' can be used as wildcards
    in the patterns. Only user names are valid; a numerical user ID
    is not recognized. By default, login is allowed for all users.
    If the pattern takes the form USER@HOST then USER and HOST are
    separately checked, restricting logins to particular users from
    particular hosts.


Save it and restart the SSH service. SSH login is now disallowed for username1, username2, username3 and username4.

/etc/init.d/sshd restart
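The opposite approach also exists: an AllowUsers line in sshd_config permits only the listed accounts and implicitly denies everyone else (the user names below are just placeholders):

AllowUsers admin nikesh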

Reference: http://www.planetmy.com/blog/

MAC like dock for Ubuntu

Kiba-dock and AWN are two docks that resemble the Mac dock.
You can try them from:
http://www.kiba-dock.org/
https://launchpad.net/awn

Here is a tutorial to setup kiba-dock in Ubuntu
http://ubuntuforums.org/showthread.php?t=268645

Here is a tutorial to setup awn
http://wiki.awn-project.org/Installation

While Kiba-dock focuses on launchers, AWN supports both launchers and a task list.
While both of them give you a fancy desktop, they require compositing (such as Compiz Fusion) to be enabled, so be aware if you have an older system.

10 Slick Linux Desktops

[Ten desktop screenshots, numbered 1 through 10 -- images not reproduced here.]
via linuxhaxor.net

How does your desktop look? Share with us.

PHP Predefined variables

PHP provides a large number of predefined variables to all scripts. The variables represent everything from external variables to built-in environment variables

    * Superglobals — Superglobals are built-in variables that are always available in all scopes
    * $GLOBALS — References all variables available in global scope
    * $_SERVER — Server and execution environment information
    * $_GET — HTTP GET variables
    * $_POST — HTTP POST variables
    * $_FILES — HTTP File Upload variables
    * $_REQUEST — HTTP Request variables
    * $_SESSION — Session variables
    * $_ENV — Environment variables
    * $_COOKIE — HTTP Cookies
    * $php_errormsg — The previous error message
    * $HTTP_RAW_POST_DATA — Raw POST data
    * $http_response_header — HTTP response headers
    * $argc — The number of arguments passed to script
    * $argv — Array of arguments passed to script

HTTP Response Codes

The following are webserver response codes that are used by webservers of all platforms.
Code     Description
100     Continue
101     Switching protocols
200     OK
201     Created
202     Accepted
203     Non-authoritative information
204     No content
205     Reset content
206     Partial content
300     Multiple choices
301     Moved permanently
302     Moved temporarily
303     See other
304     Not modified
305     Use proxy
307     Temporary redirect
400     Bad request
401     Unauthorized
402     Payment required
403     Forbidden
404     Not Found
405     Method not allowed
406     Not acceptable
407     Proxy authentication required
408     Request timeout
409     Conflict
410     Gone
411     Length required
412     Precondition failed
413     Request entity too large
414     Request URI too large
415     Unsupported media type
416     Requested range not satisfiable
417     Expectation failed
500     Internal server error
501     Not implemented
502     Bad gateway
503     Service unavailable
504     Gateway timeout
505     HTTP version not supported

Convert image formats using ImageMagick

The ImageMagick suite of programs allows you to manipulate images from the command-line, making it much simpler to convert large numbers of pictures or even for that single quick image conversion.

The convert tool is used to convert pictures from one format or size to another. The easiest method is to simply convert an image from one format to another using:

$ convert image.jpg image.png

This will convert image.jpg from a JPEG to a PNG and save it as image.png.

To convert and resize an image, use:

$ convert -resize 50x50 image.jpg image.png

This will convert from JPEG to PNG format and resize the resulting image to fit within 50x50 pixels. Likewise, you can use convert to rotate an image:

$ convert -rotate 90 image.jpg image.png
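For batch work, the companion mogrify tool applies the same kind of operation to many files at once; for example, the following (a sketch, best tried on a copy of your image directory) writes a .png version of every JPEG in the current directory:

$ mogrify -format png *.jpg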

How to use/configure nscd for caching

nscd (Name Service Cache Daemon) is part of the GNU C Library -- a daemon which handles passwd, group and host lookups for running programs and caches the results for the next query. You should install this package only if you use slow services like LDAP, NIS or NIS+.

The nscd service comes as part of glibc, which means every Linux distribution provides it. It is also extremely simple to set up. Once installed, edit the /etc/nscd.conf file to look similar to this:

 server-user nscd
  debug-level 0
  reload-count unlimited
  paranoia no
  enable-cache passwd yes 
  positive-time-to-live passwd 3600 
  negative-time-to-live passwd 20 
  suggested-size passwd 211 
  check-files passwd yes 
  persistent passwd yes 
  shared passwd yes 
  
  enable-cache group yes 
  positive-time-to-live group 3600 
  negative-time-to-live group 60 
  suggested-size group 211 
  check-files group yes 
  persistent group yes 
  shared group yes 
  
  enable-cache hosts no 

Now start the nscd service. The above configuration tells nscd to cache group and passwd entries and to let them persist for 3600 seconds.

Once nscd has started and has a few cached entries under its belt -- if you are already logged in and then disconnect from the network -- you will still be able to continue using the system just as if you were on the network -- apart from accessing shares and printers, utilising Kerberos, and performing new login sessions.
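On most distributions you start it like any other service, and nscd itself can print its configuration and cache statistics, which is handy for checking that caching is actually happening (paths may vary by distribution):

# /etc/init.d/nscd start
# nscd -g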

How to Manage disk Quotas for users

Quotas are defined per-filesystem. Most distros support quotas, although not all do it out-of-the-box, and you may have to install the quota package. To enable quota support, edit /etc/fstab as root and add the usrquota and grpquota options to the filesystems you wish to enable quota support for. For instance:

/dev/hda3 /home ext3 defaults,nosuid,nodev,usrquota,grpquota 1 2

Once you have made the changes, remount the filesystem(s) you have changed:

# mount -o remount /home

To check that quota support is indeed enabled, execute:

# quotacheck -augmv

This will instruct quotacheck to check all filesystems for user and group quotas without remounting them as read-only. Now you can enable quotas with the quotaon command:

# quotaon -augv

Once quotas have been turned on, use edquota to edit the quotas for a particular user:

# edquota -u nikesh

This will open the default system editor (usually vim) where you can edit the hard and soft limits for both blocks and inodes for each filesystem that supports quotas.

You can then view current quota usage by using the repquota tool:

# repquota -a

Once a soft quota has been exceeded, the user is notified once that they have exceeded their quota, but will be able to continue writing to the system unless they reach the hard quota; at which point, any new files created will be 0 bytes in size.
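The grace period that applies between the soft and hard limits can be adjusted as well (a quick sketch; this opens the same editor as edquota -u):

# edquota -t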

How To Transfer existing Linux system to a new hard drive

For this we need a Linux live disc (something like Knoppix).

1) Hook up new drive. I used secondary master, hdc.

2) Reboot your system with live CD, this saves complications of dynamic directories and such.

3) Partition the drive with cfdisk or an equivalent tool. Make sure you mirror the existing layout; the new partitions can also be much larger.

4) Time to format the new drives partitions (ext3, for example):

mkfs -t ext3 /dev/hdc1
mkswap /dev/hdc2
mkfs -t ext3 /dev/hdc3

5) Mount the partitions of old and new HD:

mkdir /mnt/old
mount /dev/hda1 /mnt/old
mkdir /mnt/new
mount /dev/hdc1 /mnt/new

6) Copy the old partition onto the new one:

cp --recursive --verbose /mnt/old/bin /mnt/new/bin

7) Get a drink, take a walk, etc. This is going to take a while.

8) Repeat steps 5-7 as needed for other partitions.

9) Once it’s finished, check out the new fstab:

vi /mnt/new/etc/fstab

10) Verify that all the partitions are still arranged how you want them set up.

11) Install GRUB on the new drive to make it bootable.
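A minimal sketch with GRUB legacy, assuming the new disk is still /dev/hdc and mounted on /mnt/new (adjust device names to your setup):

grub-install --root-directory=/mnt/new /dev/hdc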

12) If all went well, you should be able to shut down, swap the new hard drive into the position of the old one, and boot into your much roomier new hard drive

Lighting up LAMP in Ubuntu 8.04 Hardy Heron

This guide will help newbies set up a fully working LAMP (Linux, Apache, MySQL, PHP) server on Ubuntu 8.04 Hardy Heron. This will allow you to use various PHP applications such as the popular phpBB forums and the WordPress blog in addition to basic HTML pages and files.

Install Packages

First, install the required packages by typing the following into the terminal:
sudo apt-get install apache2 php5 libapache2-mod-php5 mysql-server libapache2-mod-auth-mysql php5-mysql phpmyadmin
This will install Apache, PHP, MySQL, their respective modules, and phpMyAdmin. Enter your preferred root password for MySQL when prompted and choose apache2 to be automatically configured.
If you'd like, you can test whether Apache is working properly by going to http://localhost/ in your web browser.
Optionally, if you want computers on the same network to connect to the server you may want to edit /etc/mysql/my.cnf by invoking the command:
sudo gedit /etc/mysql/my.cnf
Then, in the following line, replace 127.0.0.1 with your own IP address.
bind-address = 127.0.0.1

Configuring

To enable PHP and MySQL to work together, edit the php.ini file:
sudo gedit /etc/php5/apache2/php.ini
Then un-comment the following line. Save and close the file.
;extension = mysql.so

Accessing PHPMyAdmin

Edit the Apache configuration file:
sudo gedit /etc/apache2/apache2.conf
Add the following line to the bottom of the file. Save and close.
Include /etc/phpmyadmin/apache.conf
Restart Apache.
sudo /etc/init.d/apache2 restart

Testing PHP

Create and edit a file called testphp.php in your /var/www/ folder:
sudo gedit /var/www/testphp.php
Insert the following text inside that file and save:
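(The snippet itself did not survive in the original post; the usual minimal test file is the one below.)

<?php phpinfo(); ?>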
Go back to your web browser and navigate to:
http://localhost/testphp.php
The PHP page should display. If a download window appears instead, something went wrong. Try reinstalling php5 and libapache2-mod-php5.

Testing phpMyAdmin

Navigate your web browser to:
http://localhost/phpmyadmin/
If the phpMyAdmin login page displays, then… You are done. That wasn’t so hard.
Now that you have LAMP running, why not try some applications?

Note: The root directory of your web server is /var/www/



Building A Linux Filesystem From An Ordinary File

First, create a 20MB file (you could use any size for your FS) by executing the following command:

# dd if=/dev/zero of=disk-image count=40960
40960+0 records in
40960+0 records out
20971520 bytes (21 MB) copied, 0.225793 s, 92.9 MB/s

Next, to format this as an ext3 filesystem, you just execute the following command:
# /sbin/mkfs -t ext3 -q disk-image
disk-image is not a block special device.
Proceed anyway? (y,n)

Next, you need to create a directory that will serve as a mount point for the loopback device.
# mkdir fs

Now mount this filesystem (file):
# mount -o loop=/dev/loop0 disk-image fs

Check if your FS is mounted
# mount
/dev/hda2 on / type ext3 (rw,noatime,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/hda5 on /mnt/D type vfat (rw)
/dev/hda1 on /mnt/C type ntfs (rw)
/dev/hda7 on /cache type ext3 (rw,noatime)
/root/disk-image on /root/fs type ext3 (rw,loop=/dev/loop0)

Now move into the mounted directory and create some directories and files on this new filesystem, which we have created from an ordinary file:
# ll
total 14
drwx------ 2 root root 12288 Apr 25 23:30 lost+found
drwxr-xr-x 2 root root 1024 Apr 25 23:33 nik
drwxr-xr-x 2 root root 1024 Apr 25 23:33 nikesh
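When you are done, unmount the image and, if the loop device was set up explicitly as above, detach it (a small cleanup sketch):

# umount fs
# losetup -d /dev/loop0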

Install windows Application using wine-doors (single Click)

Wine-doors is an application designed to make installing windows software on Linux, Solaris or other Unix systems easier. Wine-doors is essentially a package management tool for windows software on Linux systems. Most Linux desktop users are familiar with package management style application delivery, so it made sense to apply this model to windows software.
Wine-Doors has a community that constantly tests existing Windows applications for compatibility with Wine and adds them to the Wine-Doors repository. These applications are then available to be installed with a single click using a Synaptic-like package manager interface. They are also known as application packs.

Compatible Applications & Games

    * Acrobat Reader
    * Quicktime
    * iTunes
    * Winamp
    * Call of Duty
    * Far Cry
    * Max Payne
    * Max Payne 2
    * Battlefield 1942
    * World of Warcraft
    * Wizardry 8

Installations:

Ubuntu:
Download the wine-doors deb package from the download section. Execute the following command from the download folder to install the wine-doors deb package:

sudo dpkg -i wine-doors*_all.deb

Now start Wine Doors by clicking on Applications > System Tools > Wine Doors .

For Fedora Users: They can install using rpm command -- rpm -ivh wine-doors-0.1.2-1.i386.rpm

running "rm -rf /"


PostgreSQL setup and configuration for md5 authentication

By default, connections via TCP/IP are disabled, and the IDENT method is used for authentication. Please refer to the PostgreSQL Administrator's Guide for details.

To enable TCP/IP connections, edit the file /etc/postgresql//main/postgresql.conf

Locate the line #tcpip_socket = false and change it to tcpip_socket = true.

By default, the user credentials are not set for MD5 client authentication. So, first it is necessary to configure the PostgreSQL server to use trust client authentication, connect to the database, configure the password, and revert the configuration back to use MD5 client authentication. To enable trust client authentication, edit the file /etc/postgresql//main/pg_hba.conf

Comment out all the existing lines which use ident and MD5 client authentication and add the following line:

    local all postgres trust

Then, run the following command to start the PostgreSQL server:

    sudo /etc/init.d/postgresql start

Once the PostgreSQL server is successfully started, run the following command at a terminal prompt to connect to the default PostgreSQL template database

    psql -U postgres -d template1

The above command connects to PostgreSQL database template1 as user postgres. Once you connect to the PostgreSQL server, you will be at a SQL prompt. You can run the following SQL command at the psql prompt to configure the password for the user postgres.

    template1=# ALTER USER postgres with encrypted password 'your_password';

After configuring the password, edit the file /etc/postgresql//main/pg_hba.conf to use MD5 authentication:

Comment the recently added trust line and add the following line:

    local all postgres md5
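Then restart PostgreSQL and verify that password authentication is now in effect; psql should prompt for the password you set above (a quick sketch):

    sudo /etc/init.d/postgresql restart
    psql -U postgres -W -d template1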


Quick Configuration of AIDE (Advanced Intrusion Detection Environment)

AIDE (Advanced Intrusion Detection Environment) is a free replacement for Tripwire(tm). It generates a database that can be used to check the integrity of files on a server. It uses regular expressions to determine which files get added to the database. You can use several message digest algorithms to ensure that the files have not been tampered with.

Default configuration of aide is quite fine. But we are going to tweak it slightly more.

Send the report

Reports, which are created once a day, can be sent to a custom address. You need to change the MAILTO variable to whichever address you like; the default is to send them to root on localhost.
To change it, open and edit /etc/default/aide

Configuring aide

Most AIDE configuration is in the file /etc/aide/aide.conf. This file is pretty well documented and the default rules are decent, but we are going to make some slight changes.

AIDE aims at reporting files that have changed since the last snapshot (/var/lib/aide/aide.db). A good security measure is to keep that file on a read-only device such as a floppy disk or a CD-ROM. If your machine has such a device, you can use the snapshot from that device. So let's say that you have a copy of aide.db on a CD-ROM.

To use that snapshot, you could change:
database=file:/var/lib/aide/aide.db
to
database=file:/media/cdrom/aide.db 
instead. That way, if an intruder gets into your machine, he won't be able to modify aide.db.

By default, AIDE checks for changes in the binaries and libraries directories. Those changes are matched against the BinLib rule, which basically checks for any changes in permissions, ownership, modification, access and creation date, size, md5 and sha1 signature, inode, number of links and block count. It also checks for modifications in the log files against the Logs rule. Because log files tend to grow, you cannot use a signature there, and you also have to ask AIDE not to check for size modification (S). OK, this should be enough to understand how AIDE works. Reading through /etc/aide/aide.conf is a good way to learn more.

To make AIDE also watch /etc, add the line "/etc ConfFiles" to /etc/aide/aide.conf; this will check for changes in /etc/.

Updating aide

aide is run on a daily basis through the script /etc/cron.daily/aide. The default settings in /etc/default/aide tell aide to update its database. Using the database_out value in /etc/aide/aide.conf, aide will output a new database to /var/lib/aide/aide.db.new each time it runs, if you kept the default settings.

Any time you install new packages, change some configuration settings and so on, it is worth using an up-to-date database so aide won't keep reporting changes or additions in /etc/mynewsoft, /bin/mynewsoft ...
So, when you install new software or make some configuration changes, run:
# /etc/cron.daily/aide
Then, check in the report that modifications were only brought to the files you intended to modify and that added files are only coming from packages you have just installed.

Once you are sure that everything is fine, copy the new database to whatever place your database points to (CD-ROM, floppy, somewhere on your filesystem...). This way, you will get lighter reports the next time aide runs.
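If you prefer not to wait for the cron job, the aide binary can also be driven by hand; roughly (exact wrapper names and config paths vary between distributions), you initialise a new database and then run a check against it:

# aide --init
# aide --check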

How To increase the limit for file handles

There are cases when you get a lot of error messages about "running out of file handles"; increasing this limit can solve the issue. To change the value, just write the new number into the file as below:

# cat /proc/sys/fs/file-max
8192

# echo 943718 > /proc/sys/fs/file-max

# cat /proc/sys/fs/file-max
943718

This value also can be changed using "sysctl" command. To make the change permanent, add the entries to /etc/sysctl.conf

# vi /etc/sysctl.conf
fs.file-max = 943718
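With the entry in /etc/sysctl.conf, you can apply it without a reboot (standard sysctl usage):

# sysctl -p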

(r,w,x) Access Permissions For Files And Directories

The meaning of each access permission differs for files and directories:

Read (r)
File: Users can open and read the file.
Directory: Users can view the contents of the directory. Without this permission, users cannot list the contents of this directory with ls -l, for example. However, if they only have execute permission for the directory, they can nevertheless access certain files in this directory if they know of their existence.

Write (w)
File: Users can change the file: they can add or drop data and can even delete the contents of the file. However, this does not include the permission to remove the file completely from the directory as long as they do not have write permissions for the directory where the file is located.
Directory: Users can create, rename or delete files in the directory.

Execute (x)
File: Users can execute the file. This permission is only relevant for files like programs or shell scripts, not for text files. If the operating system can execute the file directly, users do not need read permission to execute the file. However, if the file must be interpreted, like a shell script or a Perl program, additional read permission is needed.
Directory: Users can change into the directory and execute files there. If they do not have read access to that directory, they cannot list the files but can access them nevertheless if they know of their existence.


Quick and simple usage of tcpdump (packet sniffer)

Tcpdump is a popular computer network debugging and security tool which allows the user to intercept and display TCP/IP packets being transmitted or received over a network to which the computer is attached. Tcpdump allows us to precisely see all the traffic and enables us to create statistical monitoring scripts.

On an Ethernet segment, tcpdump operates by putting the network card into promiscuous mode in order to capture all the packets going through the wire. Using tcpdump we have a view of any TCP/UDP connection establishment and termination, and we can measure response times and packet loss percentages.

Some simple usage:

To print all packets arriving at or departing from 192.168.0.2:
# tcpdump -n host 192.168.0.2

To print traffic between 192.168.0.2 and either 10.0.0.4 or 10.0.0.5:
# tcpdump -n host 192.168.0.2 and \( 10.0.0.4 or 10.0.0.5 \)

To print all IP packets between 192.168.0.2 and any host except 10.0.0.5:
# tcpdump ip -n host 192.168.0.2 and not 10.0.0.5

To print all traffic between local hosts and hosts at Berkeley:
# tcpdump net ucb-ether

To print all ftp traffic through internet gateway xx:
# tcpdump 'gateway xx and (port ftp or ftp-data)'

To print traffic neither sourced from nor destined for local hosts (if you gateway to one other net, this stuff should never make it onto your local net).
# tcpdump ip and not net localnet

To print the start and end packets (the SYN and FIN packets) of each TCP conversation that involves a non-local host.
# tcpdump 'tcp[13] & 3 != 0 and not src and dst net localnet'

To print IP packets longer than 576 bytes sent through gateway xx:
# tcpdump 'gateway xx and ip[2:2] > 576'

To print IP broadcast or multicast packets that were not sent via ethernet broadcast or multicast:
# tcpdump 'ether[0] & 1 = 0 and ip[16] >= 224'

To print all ICMP packets that are not echo requests/replies (i.e., not ping packets):
# tcpdump 'icmp[0] != 8 and icmp[0] != 0'
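Finally, tcpdump can also write raw captures to a file with -w and read them back later with -r, which is often easier than watching the live output (the interface and file name here are just examples):

# tcpdump -i eth0 -w capture.pcap host 192.168.0.2
# tcpdump -r capture.pcap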

Tux3 Versioning Filesystem

"Since everybody seems to be having fun building new filesystems these days, I thought I should join the party, began Daniel Phillips, announcing the Tux3 versioning filesystem. He continued, "Tux3 is a write anywhere, atomic commit, btree based versioning filesystem. As part of this work, the venerable HTree design used in Ext3 and Lustre is getting a rev to better support NFS and possibly become more efficient." Daniel explained:
"The main purpose of Tux3 is to embody my new ideas on storage data versioning. The secondary goal is to provide a more efficient snapshotting and replication method for the Zumastor NAS project, and a tertiary goal is to be better than ZFS."
In his announcement email, Daniel noted that implementation work is underway, "much of the work consists of cutting and pasting bits of code I have developed over the years, for example, bits of HTree and ddsnap. The immediate goal is to produce a working prototype that cuts a lot of corners, for example block pointers instead of extents, allocation bitmap instead of free extent tree, linear search instead of indexed, and no atomic commit at all. Just enough to prove out the versioning algorithms and develop new user interfaces for version control."

The question, of course, is which file system will come out ahead: Tux3 is still at the beginning, while Btrfs could see a first beta in the coming months. There are still rumors that ZFS might be released under the GPL, and Hammer could also be implemented for Linux.

Either way, exciting times for file systems on Linux are ahead.