
Search Google Using UNIX Command Line Shell - Goosh

"Stefan Grothkopp has come up with a pretty neat tool called goosh. It's essentially a browser-oriented, shell-like interface that allows you to quickly search Google (and images and news) and Wikipedia and get information in a text-only format.

The interface looks quite similar to a Unix shell and can easily be adopted by anyone who is bored with the usual web-browser way of searching.

 
All you get is a blank screen and a prompt to enter commands. The prompt acts as both search box and control button. For example, if you enter linux and press Enter, you get Google search results for linux. To search for linux on Wikipedia, enter wiki linux at the prompt. Type h or help to get a list of the other available commands, such as images to search for images only, or lucky, which acts like the "I'm feeling lucky" button.
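As a rough sketch of a session (the exact prompt text may differ):

guest@goosh.org:/web> linux
guest@goosh.org:/web> wiki linux
guest@goosh.org:/web> images tux
guest@goosh.org:/web> help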

A few OpenSuSe mirrors to serve Indian requests

Recently, it became evident that users in India don't get good mirrors. The background is that India has bad connectivity to neighbouring countries but good connections to German and US mirrors, so this was solved by configuring a few German and US mirrors to also serve users from India.

Courtesy of Adrian Reber from Esslingen University of Applied Sciences, there is an illustrative screenshot that visualizes how well this works: a world map showing accesses to their openSUSE mirror by country (live view). In openSUSE's MirrorBrain configuration, this mirror is set up to receive German, Danish, Polish, and Indian requests.

The world map clearly shows how the mirror gets nearly exclusively German requests, as well as those from India. The same happens for some other German and some US mirrors.

Note that if a mirror in India becomes available (which would be nice!), it will automatically be preferred, and the other mirrors will become fall-backs.

Photo Collage Maker - Shape Collage

Making photo collages on Linux is now easy. The program is called Shape Collage and, being developed in Java, offers the advantage of being multi-platform: you can download it for Linux, Windows, and Mac OS.

Shape Collage is simple, well made, and has a very intuitive interface.

The process of creating the final image is very simple: you choose the photos individually (or an entire folder), set various parameters (shape, spacing, size, colors, etc.), and save the final image.

 
Features:
    * Create picture collages in less than a minute with just a few mouse clicks
    * Photos are automatically and intelligently placed using a very fast patent pending method
    * Use photos on your computer or from the web to make collages
    * Make collages in any imaginable shape or form
    * Rectangle, heart, circle, letters, or even draw your own shape!
    * Adjust the collage size, size of the photos, number of photos, and spacing between photos
    * Change the background, the colour of the border, and more
    * Save as JPEG, PNG, or Photoshop PSD
    * Cross-platform and free for personal use with no ads, viruses, spyware, STDs, trial periods, or watermarks

HowTo Autostart an application in KDE4

The easiest way to make an application autostart in KDE is to create an executable script in the Autostart folder.

Go to your Autostart folder; with the default Autostart path you just need to execute:

cd ~/.kde4/Autostart/

Create a new file within this folder (the name doesn't matter) using any editor; for example you can execute: vi start.script

The first line of your new file needs to be

#!/bin/bash

This makes the file a script (you will make it executable later).

Now, on each line, specify what you want KDE to execute on every startup. For example, if you want ktorrent to start every time, add the command to this file:

ktorrent

After adding all the lines, save the changes to the file and exit.
The last thing you need to do is make this script executable, so just execute:

chmod +x start.script

At this point you can restart KDE to see the changes during startup. Note that this only affects the current user.
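Putting it all together, a complete start.script might look like this (the trailing & is an assumption on my part, to keep a long-running application from blocking anything else you add to the script later):

#!/bin/bash
# commands KDE should run at every login for this user
ktorrent &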

Internet Explorer is helping users (at last)

Spending millions of dollars, Microsoft came up with a new version of Internet Explorer (not really):

 
Picture Source: DeadDog

HowTo add Repositories into OpenSuSe

Go to a terminal and type the command yast2 to start the YaST application.
Select Software Repositories.

 
Click on Add and select Specify URL.

 
Then enter a Repository Name and a URL.

 
Package metadata will be downloaded and parsed - this takes time depending on the mirror speed, your bandwidth, the size of the repository, and the speed of your system.
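The same can also be done from the command line with zypper (the repository URL and alias below are placeholders for your own):

zypper addrepo http://download.example.com/repo/ example-repo
zypper refresh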

HowTo: Regularly clean up the temp folders (OpenSuSe)

If you would like your system to clean up its temporary folders regularly, here are the steps.

Launch YaST:
Choose the category "System", then "Editor for /etc/sysconfig Files".
Then open "System" and after that open "Cron".
Here you will find the following entries:

CLEAR_TMP_DIRS_AT_BOOTUP = "no"
Change this to "yes". This enables or disables the cleanup function globally.
Default: "no" (= off)

TMP_DIRS_TO_CLEAR = "/tmp"
You can leave this as it is. This is the first list of folders whose contents may be deleted.
Default: "/tmp"

MAX_DAYS_IN_TMP = 0
Change this to "1" (= deletes all files older than 24 hours). This is the maximum file lifetime in days for the files in TMP_DIRS_TO_CLEAR.
Default: 0 (= disabled)

Close the editor; it shows all the changes before closing. At the next system start, cron cleans up your /tmp.

You can define a second list as well:

LONG_TMP_DIRS_TO_CLEAR = ""
A second list of folders whose contents may be deleted.
Default: ""
Possible option: "/var/tmp"

MAX_DAYS_IN_LONG_TMP = 0
The maximum file lifetime in days for all files defined in LONG_TMP_DIRS_TO_CLEAR.
Default: 0 (= disabled)
Possible option: "28"
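If you prefer to skip YaST, the same settings live in /etc/sysconfig/cron; with the changes above (and the optional second list enabled) the relevant lines would read:

CLEAR_TMP_DIRS_AT_BOOTUP="yes"
TMP_DIRS_TO_CLEAR="/tmp"
MAX_DAYS_IN_TMP="1"
LONG_TMP_DIRS_TO_CLEAR="/var/tmp"
MAX_DAYS_IN_LONG_TMP="28"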

Based on Reinhard Haase's HowTo at http://tiefenwirkung.wordpress.com/2009/07/19/temporare-dateien-unter-linux/

How to query Yahoo search engine and get results from Perl script

First, like all non-standard Perl modules, you will need to install the Yahoo::Search module from CPAN: open the cpan> console and install it:

      cpan> install Yahoo::Search

Here is a simple Perl script that uses the Yahoo::Search module to query the Yahoo search engine, fetch the results, and display them on the console:

#!/usr/bin/perl
use Yahoo::Search;

# query the "Doc" (web) search space for "linuxpoison"
my @results = Yahoo::Search->Results(
    Doc   => "linuxpoison",
    AppId => "YahooDemo",
    Start => 0,
    Count => 5);

for my $result (@results) {
    printf "Result: #%d\n", $result->I + 1;
    printf "Url: %s\n",     $result->Url;
    printf "%s\n",          $result->ClickUrl;
    printf "Summary: %s\n", $result->Summary;
    printf "Title: %s\n",   $result->Title;
    print "\n";
}
The Start and Count options let you pull the results in pages: Count tells Yahoo how many results to return at once, Start determines the starting record, and Doc is the search query.

AppId => "YahooDemo" is the generic demo ID through which you can query the Yahoo search engine.
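For example, a hypothetical follow-up call fetching the next page of five results only needs a different Start value:

my @results = Yahoo::Search->Results(
    Doc   => "linuxpoison",
    AppId => "YahooDemo",
    Start => 5,    # skip the five results already shown
    Count => 5);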

Save the file as "yahoo_search.perl", make it executable (chmod 755 yahoo_search.perl), and run it with the command perl yahoo_search.perl.


There are still lots of things you can do with this script, such as integrating it with your application, adding exception handling, or formatting the output, but this is a good starting point.

Read more about the Yahoo::Search Perl module.

How to Install Perl modules

The simplest way to get Perl modules installed is to use the CPAN module itself. If you are the system administrator and want to install the module system-wide, you'll need to switch to your root user. To fire up the CPAN module, just get to your command line and run this:

    # perl -MCPAN -e shell

If this is the first time you've run CPAN, it's going to ask you a series of questions - in most cases the default answers are fine - and finally you should see the cpan> prompt. Installing a module is as easy as install MODULE::NAME - for example, to install the HTML::Template module you'd type:

    cpan> install HTML::Template

CPAN should take it from there and you'll wind up with the module installed into your Perl library.

Another way -- let's say you're at your system command line and you just want to install a module as quickly as possible - you can run the Perl CPAN module via command-line perl and get it installed in a single line:

    # perl -MCPAN -e 'install HTML::Template'

Another way -- is to grab the required file using wget. Next you'll want to unpack it with something like:

    # tar -zxvf HTML-Template-2.8.tar.gz

This will unpack the module into a directory; then you can move in and look for the README or INSTALL files. In most cases installing a module by hand is still pretty easy, though not as easy as CPAN. Once you've switched into the base directory for the module, you should be able to get it installed by typing:

    # perl Makefile.PL
    # make
    # make test
    # make install
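Whichever method you use, a quick one-liner verifies that the module is installed and shows its version (HTML::Template here; substitute your module's name):

    # perl -MHTML::Template -e 'print "$HTML::Template::VERSION\n"'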

Getting System Information (OpenSuSe) - phpSysInfo

phpSysInfo is a PHP script that displays information about the host being accessed.
It displays things like uptime, CPU, memory, SCSI, IDE, PCI, Ethernet, floppy, and video information.

Why phpSysInfo? As the description above says: I need to know everything about the server with one click, and, importantly, I want to be able to check it all from a browser. For that, phpSysInfo is the best choice.

Installation:
Download phpSysInfo - here

Extract the package and copy it to the web server directory
# tar -zxvf phpSysInfo-3.0-RC8.tar.gz
# cp -r phpsysinfo /srv/www/htdocs/phpsysinfo
Rename the config.php.new to config.php (inside phpsysinfo directory)
# mv config.php.new config.php
Now go to the browser and open index.php from within the phpsysinfo directory.
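Assuming the web server's document root is /srv/www/htdocs as in the copy step above and the server is running locally, the URL would be:

http://localhost/phpsysinfo/index.php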

Squid Authentication using RADIUS

RADIUS is a server for remote user authentication and accounting. Its primary use is by Internet Service Providers, though it may just as well be used on any network that needs a centralized authentication and/or accounting service for its workstations.

In this article I won't go into detail about the installation and configuration of the RADIUS or Squid servers, and will assume that both are installed and configured properly.

Download the squid authenticating module -- Here.
Unpack it and compile it
# tar -zxvf squid_radius_auth-1.10.tar.gz
# cd squid_radius_auth-1.10/
# make
You will get a squid_radius_auth executable that you can move to a safe place. It needs a config file (squid_radius_auth) that should contain the name of the RADIUS server and the shared secret:
    server radius_server
    secret secret_phrase
Now configure Squid to use the RADIUS server for authentication: open your squid.conf file, then find and replace the auth section with the following:
    auth_param basic program /path_to_auth/squid_radius_auth
    auth_param basic children 5
    auth_param basic realm Please enter your domain credentials
    auth_param basic credentialsttl 8 hours
Next you have to configure Squid to allow only authenticated users. In the following example, clients on the local LAN (listed by source address in the localusers file) are allowed without logging in, while everyone else is asked to log in:
    acl passwd proxy_auth REQUIRED
    acl localusers src "/etc/squid/localusers"

    http_access allow localusers
    http_access allow passwd
    http_access deny all
You'll also have, in the RADIUS server's logs, a record of who logged on to use the web services and when.
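For reference, a file used by an src ACL such as localusers simply lists source addresses or subnets, one per line; a hypothetical /etc/squid/localusers could contain:

    192.168.1.0/24
    10.0.0.5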

SQL client and front-end for multiple databases - CrunchyFrog

CrunchyFrog is a SQL client and schema browser mainly (but not solely) for the GNOME desktop. It's written in Python (PyGTK) and licensed under the GNU General Public License v3.

Features
    * Supports various databases (PostgreSQL, MySQL, SQLite, Oracle, SQLServer, Firebird, Informix, MaxDB).
    * Lightweight user interface for daily tasks.
    * SQL editor with syntax highlighting, auto completion and SQL formatting.
    * Export results to CSV and OpenOffice.
    * Inspect database objects.
    * Supports multiple database connections at once (e.g. for switching between development and production environments).

Ubuntu / Debian:
To install CrunchyFrog add the following lines to /etc/apt/sources.list:

deb http://ppa.launchpad.net/crunchyfrog/ppa/ubuntu jaunty main
deb-src http://ppa.launchpad.net/crunchyfrog/ppa/ubuntu jaunty main

Replace "jaunty" with "intrepid" or "karmic" if you're running Intrepid or karmic.

To import the repository's GPG key:
sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 266d5f41c7f166d8

Finally to install CrunchyFrog run:
sudo aptitude update
sudo aptitude install crunchyfrog crunchyfrog-gnome


Limit the CPU usage of an application (process) - cpulimit

cpulimit is a simple program that attempts to limit the cpu usage of a process (expressed in percentage, not in cpu time). This is useful to control batch jobs, when you don't want them to eat too much cpu. It does not act on the nice value or other scheduling priority stuff, but on the real cpu usage. Also, it is able to adapt itself to the overall system load, dynamically and quickly.

Installation:
Download the last stable version of cpulimit.
Then extract the source and compile with make:

    tar zxf cpulimit-xxx.tar.gz
    cd cpulimit-xxx
    make

The executable file name is cpulimit. You may want to copy it to /usr/bin.

Usage:
Limit the process 'bigloop' by executable name to 40% CPU:

    cpulimit --exe bigloop --limit 40
    cpulimit --exe /usr/local/bin/bigloop --limit 40

Limit a process by PID to 55% CPU:

    cpulimit --pid 2960 --limit 55

cpulimit should run at least as the same user that runs the controlled process, but it is much better to run cpulimit as root, in order to have a higher priority and more precise control.

Note:
If your machine has one processor you can limit the percentage from 0% to 100%, which means that if you set for example 50%, your process cannot use more than 500 ms of cpu time for each second. But if your machine has four processors, the percentage may vary from 0% to 400%, so setting the limit to 200% means using no more than half of the available power. In any case, the percentage is the same as what you see when you run top.
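A quick way to try it out: start a CPU-bound loop and cap it (a throwaway example, using yes as the busy process and $! for its PID):

    yes > /dev/null &
    cpulimit --pid $! --limit 20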

Windows "Screen Of Death" now on Linux (GDM Theme)

Here is a fantastic GDM theme based on the famous M$ Screen Of Death.


Download "Screen Of Death" GDM

Improving filesystem read performance using "noatime"

Linux records information about when files were last accessed, and there is a cost associated with recording this last access time. The ext3 filesystem of Linux has an attribute that allows the super-user to mark individual filesystems so that recording of the last access time is skipped. This can lead to significant performance improvements for frequently accessed, frequently changing files, such as the contents of a web server directory.

The only drawback is that files' access times (atime) will no longer be updated.

Linux has a special mount option for file systems called "noatime" that can be added to each line that addresses one file system in the /etc/fstab file. If a file system has been mounted with this option, reading accesses to the file system will no longer result in an update to the atime (access time) information associated with the file.

The importance of the noatime setting is that it eliminates the need by the system to make writes to the file system for files which are simply being read. Since writes can be somewhat expensive, this can result in measurable performance gains.

Note that the modification time of a file will continue to be updated any time the file is written to.

Edit the fstab file (vi /etc/fstab) and add the "noatime" option to the line for the filesystem you are interested in, as shown below:

/dev/sda1          /var/www          ext3          defaults,noatime          1  2
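To apply the option to an already-mounted filesystem without rebooting (using the /var/www mount from the example line above):

mount -o remount,noatime /var/www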

Auto reboot after kernel panic

When the kernel encounters certain unrecoverable errors it calls the "panic" function. This panic results in LKCD (Linux Kernel Crash Dump) initiating a kernel dump, where kernel memory is copied out to a pre-designated dump area (the dump device is configured as primary swap by default). The kernel is not completely functional at this point, but there is enough functionality left to copy memory to disk. When the system boots back up, it checks for a new crash dump; if one is found, it is copied from the dump location to the file system, into the "/var/log/dump" directory by default. After copying the image, the system continues to boot normally and forensics can be performed at a later date.


By default, after a kernel panic the system just sits there waiting for a restart. This is because of the value set for the "kernel.panic" parameter:


# cat /proc/sys/kernel/panic
0

To disable this behaviour and make Linux reboot after a kernel panic, we have to set the parameter "kernel.panic" to an integer value greater than zero, where the value is the number of seconds to wait before an automatic reboot. For example, if you set it to "10", the system waits 10 seconds before rebooting automatically. To make this permanent, edit /etc/sysctl.conf and add the following line at the end of the file:

kernel.panic = 10
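To apply the new value immediately, without waiting for a reboot, you can also set it at runtime; either of these standard commands has the same effect (the second writes the value straight into procfs):

# sysctl -w kernel.panic=10
# echo 10 > /proc/sys/kernel/panic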


How to Flush the DNS Cache in Linux

Most DNS clients cache the results of name resolution requests. This speeds up name resolution if multiple lookups are done to the same address, such as is common when browsing the web.

Sometimes a bad DNS entry will be cached and you will need to either flush the DNS cache to get rid of it, or wait up to 24 hours for it to be dropped from the cache automatically.

In Linux, the nscd daemon manages the DNS cache.
To flush the DNS cache, restart the nscd daemon using the command: # /etc/init.d/nscd restart
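Alternatively, nscd can invalidate just its hosts cache without a full restart:

# nscd -i hosts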

Multi-protocol, Multi-source, Fast and Reliable Download Utility - aria2

aria2 is a multi-protocol, multi-source, cross-platform download utility. The supported protocols are HTTP(S), FTP, BitTorrent (DHT, PEX, MSE/PE), and Metalink.

aria2 can download a file from multiple sources/protocols and tries to utilize your maximum download bandwidth. It supports downloading a file from HTTP(S)/FTP and BitTorrent at the same time, while the data downloaded from HTTP(S)/FTP is uploaded to the BitTorrent swarm. Using Metalink's chunk checksums, aria2 automatically validates chunks of data while downloading a file like BitTorrent.

There are other alternative applications. But aria2 has 2 distinctive features:

(1) aria2 can download a file from several URIs(HTTP(S)/FTP/BitTorrent)
(2) If you give aria2 a list of URIs, aria2 downloads them concurrently.

You don't have to wait for the current download queue to finish one file at a time anymore. aria2 tries to utilize your maximum download bandwidth and downloads files quickly.

Installation: aria2 binary packages are now available for a variety of platforms.
Debian & Ubuntu: apt-get install aria2
Fedora: yum install aria2
OpenSuse 11.1: "1-click" installer for aria2
Mandriva: urpmi aria2

Usage:
aria2 uses 5 connections to download 1 file by default. To limit the number of connections to, say, just 1, use the -s1 option.

Note: To pause a download, press Ctrl-C. You can resume the transfer by running aria2c with the same arguments in the same directory. You can change the URIs as long as they point to the same file.

Download a file:
aria2c http://host/image.iso

Download a file using 2 connections:
aria2c -s2 http://host/image.iso http://mirror1/image.iso http://mirror2/image.iso

Download a file from HTTP and FTP servers:
aria2c http://host1/file.zip ftp://host2/file.zip
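BitTorrent and Metalink downloads work the same way; just point aria2c at the file (the file names here are hypothetical):

Download a torrent:
aria2c file.torrent

Download via a Metalink file:
aria2c file.metalink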

Query Apache logfiles via SQL

The Apache SQL Analyser (ASQL) is designed to read Apache log files and dynamically convert them to SQLite format so you can analyse them in a more meaningful way. Using the cut, uniq, and wc commands you can parse a log file by hand to figure out how many unique visitors came to your site, but using the Apache SQL Analyser is much faster and means that the whole log gets parsed only once. Finding unique addresses is as simple as a SELECT DISTINCT query.

In terms of requirements you will need only the Perl modules for working with SQLite databases, plus the Term::ReadLine module. On a Debian system you may install both via:

apt-get install libdbd-sqlite3-perl libterm-readline-gnu-perl

Usage
Once installed, either via the package or via the source download, start the shell by typing "asql". Once the shell starts you have several commands available to you; enter help for a complete list. The three most commonly used commands are load, select & show.

The following sample session demonstrates typical usage of the shell, including the alias command, which may be used to create persistent aliases:

asql v0.6 - type 'help' for help.
asql> load /var/logs/apache/access.log
Loading: /var/logs/apache/access.log
asql> select COUNT(id) FROM logs
46
asql> alias hits SELECT COUNT(id) FROM logs
ALIAS hits SELECT COUNT(id) FROM logs
asql> alias ips SELECT DISTINCT(source) FROM logs;
ALIAS ips SELECT DISTINCT(source) FROM logs;
asql> hits
46
asql> alias
ALIAS hits SELECT COUNT(id) FROM logs
ALIAS ips SELECT DISTINCT(source) FROM logs;
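For example, the unique-visitor count mentioned in the introduction becomes a single query, using only the columns already shown above (id and source):

asql> SELECT COUNT(DISTINCT source) FROM logs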
