10 New Open Source Projects You May Not Know About

Posted by Unknown, Sunday, 29 April 2012
http://www.pcworld.com/businesscenter/article/248514/10_new_open_source_projects_you_may_not_know_about.html


With so many open source software projects under way at any given moment, it can be difficult to keep tabs on all that's going on.
Firefox, Linux, LibreOffice, and the partially open Android platform may claim the lion's share of the headlines, but there are countless lesser-known open source efforts that are equally worthy of attention.
Want a few examples? Open source-focused provider Black Duck Software this past week announced the winners of its fourth annual Open Source Rookies of the Year. Included in the list are a bunch of new projects that are worth watching.
10 Up-and-Comers
To come up with its list, Black Duck used data on open source projects from sources including Ohloh.net. It says it reviewed thousands of open source projects started in 2011.
Winners were chosen using a weighted scoring system that awarded points based on commit activity (the number of changes made to the software per day), the size of the project team, and the number of in-bound links to the project.
Without further ado, here are the winners Black Duck came up with:
1. Bootstrap, a toolkit from Twitter designed to kick-start development of Web applications and sites;
2. BrowserID, a secure, decentralized, open source, cross-browser way to sign onto websites based on the user's email address;
3. Canvas, billed by Black Duck as “the only commercial open source learning management system and the only LMS native to the cloud”;
4. Cloud Foundry, an open Platform-as-a-Service (PaaS) providing a choice of clouds, developer frameworks, and application services;
5. Moai, a mobile platform for game developers that offers cloud-based game services and rapid development of iOS, Android, and Chrome titles using the Lua scripting language;
6. Mooege, an open source educational game server emulator;
7. OpenShift, a free, auto-scaling Platform-as-a-Service (PaaS) from Red Hat;
8. Orion, a browser-based open tool integration platform built by the Eclipse platform team;
9. rstat.us, a microblogging platform that's set apart by its simplicity and openness, Black Duck says; and
10. Salt, an open source configuration management and remote execution application.
'Cloud, Mobile, and Gaming'
"The data underlying the 2011 Open Source Rookies list is consistent with shifts we see in our day-to-day business, where cloud, mobile, and gaming draw great support from involved communities of open source developers," said Tim Yeaton, president and CEO of Black Duck Software.
Indeed, open source software such as Linux is increasingly at the forefront of innovation in many enterprises, as recent survey data has shown. Over the coming year, these 10 projects will be worth keeping an eye on.


10 Reasons Open Source Is Good for Business

Posted by Unknown
http://www.pcworld.com/businesscenter/article/209891/10_reasons_open_source_is_good_for_business.html


With the many business and government organizations that now use open source software such as Linux, it's becoming increasingly clear that price is not the only advantage such software holds. If it were, companies that adopted it during the Great Recession would surely have switched back to the expensive proprietary stuff as soon as conditions began to ease, and that's clearly not the case.
Rather, free and open source software (FOSS) holds numerous other compelling advantages for businesses, some of them even more valuable than the software's low price. Need a few examples? Let's start counting.
1. Security
It's hard to think of a better testament to the superior security of open source software than the recent discovery by Coverity of a number of defects in the Android kernel. What's so encouraging about this discovery, as I noted the other day, is that the only reason it was possible is that the kernel code is open to public view.
Android may not be fully open source, but the example is still a perfect illustration of what's known as "Linus' Law," named for Linus Torvalds, the creator of Linux. According to that maxim, "Given enough eyeballs, all bugs are shallow." What that means is that the more people who can see and test a set of code, the more likely any flaws will be caught and fixed quickly. It's essentially the polar opposite of the "security through obscurity" argument used so often to justify the use of expensive proprietary products, in other words.
Does the absence of such flaw reports about the code of the iPhone or Windows mean that such products are more secure? Far from it--quite the opposite, you might even say.
All it means is that those products are closed from public view, so no one outside the companies that own them has the faintest clue how many bugs they contain. And there's no way the limited set of developers and testers within those companies can test their products as well as the worldwide community constantly scrutinizing FOSS can.
Bugs in open source software also tend to get fixed immediately, as in the case of the Linux kernel exploit uncovered not long ago.
In the proprietary world? Not so much. Microsoft, for example, typically takes weeks if not months to patch vulnerabilities such as the recently discovered Internet Explorer zero-day flaw. Good luck to all the businesses using it in the meantime.
2. Quality
Which is more likely to be better: a software package created by a handful of developers, or a software package created by thousands of developers? Just as countless developers and users work to improve the security of open source software, so too do they innovate new features and enhancements for those products.
In general, open source software gets closest to what users want because those users can have a hand in making it so. It's not a matter of the vendor giving users what it thinks they want--users and developers make what they want, and they make it well. At least one recent study has shown, in fact, that technical superiority is typically the primary reason enterprises choose open source software.
3. Customizability
Along similar lines, business users can take a piece of open source software and tweak it to suit their needs. Since the code is open, it's simply a matter of modifying it to add the functionality they want. Don't try that with proprietary software!
4. Freedom
When businesses turn to open source software, they free themselves from the severe vendor lock-in that can afflict users of proprietary packages. Customers of such vendors are at the mercy of the vendor's vision, requirements, dictates, prices, priorities and timetable, and that limits what they can do with the products they're paying for.
With FOSS, on the other hand, users are in control to make their own decisions and to do what they want with the software. They also have a worldwide community of developers and users at their disposal for help with that.
5. Flexibility
When your business uses proprietary software such as Microsoft Windows and Office, you are on a treadmill that requires you to keep upgrading both software and hardware ad infinitum. Open source software, on the other hand, is typically much less resource-intensive, meaning that you can run it well even on older hardware. It's up to you--not some vendor--to decide when it's time to upgrade.
6. Interoperability
Open source software is much better at adhering to open standards than proprietary software is. If you value interoperability with other businesses, computers and users, and don't want to be limited by proprietary data formats, open source software is definitely the way to go.
7. Auditability
With closed source software, you have nothing but the vendor's claims telling you that they're keeping the software secure and adhering to standards, for example. It's basically a leap of faith. The visibility of the code behind open source software, however, means you can see for yourself and be confident.

8. Support Options
Open source software is generally free, and so is a world of support through the vibrant communities surrounding each piece of software. Most every Linux distribution, for instance, has an online community with excellent documentation, forums, mailing lists, forges, wikis, newsgroups and even live support chat.
For businesses that want extra assurance, there are now paid support options on most open source packages at prices that still fall far below what most proprietary vendors will charge. Providers of commercial support for open source software tend to be more responsive, too, since support is where their revenue is focused.
9. Cost
Between the purchase price of the software itself, the exorbitant cost of mandatory virus protection, support charges, ongoing upgrade expenses and the costs associated with being locked in, proprietary software takes more out of your business than you probably even realize. And for what? You can get better quality at a fraction of the price.
10. Try Before You Buy
If you're considering using open source software, it will typically cost you nothing to try it out first. This is partly due to the software's free price, and partly due to the existence of LiveCDs and Live USBs for many Linux distributions, for example. No commitment required until you're sure.
None of this is to say, of course, that your business should necessarily use open source software for everything. But with all the many benefits it holds, you'd be remiss not to consider it seriously.


When You Should Disable Root Login…Or Not

Posted by Unknown, Thursday, 26 April 2012
http://tuts.pinehead.tv/2012/04/20/when-you-should-and-should-not-disable-root-login


When should you disable root login? Disabling root login is a super easy trick to increase security on your machine. Let’s take a look at why and when you should disable root login, and also when it’s OK to keep it enabled. Root or administrator users are the default users on almost all systems. As the name implies, they have all privileges on the machine and control everything. In previous articles I’ve suggested several times that disabling root login and creating root-privileged users is good security practice, but in reality you don’t always have to do this. Let’s first look at when it is best practice to disable root login.

Can your server be accessed by anyone on the internet?
What does this mean? Well, for example, if you can sit at any computer at any location on the internet and SSH to your machine, then your server can be accessed by anyone on the internet. Since the root user has all the power AND we know that almost every Linux machine comes with the root user enabled, guessing or trying to crack the root user's password is the obvious place to start if you're trying to penetrate a system.
Bots automatically scan for open SSH ports and try to break in using the root user and random passwords. By creating a user with root privileges and disabling root login, you remove this from the equation. Bots (or people) generally aren't out there trying to guess usernames AND passwords, so this increases the security of your system.
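For example, on a Debian/Ubuntu-style system the replacement account could be created like this (a rough sketch; the username admin is just a placeholder, and the admin group is sudo on Debian/Ubuntu but wheel on many other distributions):
adduser admin
usermod -aG sudo admin
# confirm the new account can become root before you disable root login
su - admin
sudo whoami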
When is it OK to leave root login enabled?
As several members of the Pinehead community have pointed out, it is OK at times to leave the root user enabled. Again, I will say it is BEST practice to change the root username and/or disable the root password. However, if your server is not internet-facing and sits only on a local network, you don't have to worry about hackers or bots trying to penetrate your system. By the very nature of a local network, they don't even have access to the system to try logging in as root.
If you can only access your servers from a VPN
Again, this is much the same as your servers being accessible only on the local network. VPNs create another layer of protection: in order to access the VPN you need credentials, and only from there can you log in to your server.
All login is disabled except from console
Let’s say your server is behind a firewall that only allows access to port 80. Then port 22 (SSH) isn’t even available to the whole internet for someone or something trying to penetrate your system. You could also just remove the services that allow remote login. If you only allow access to the server via console login (being physically in front of the server) then there is no reason to disable root login.
Last but not least…
You don’t mind taking the chances of a break-in
If you just don’t care that someone or something “could” ever break in or you think the odds are against it, then leave it open. The odds are rather low that this will happen to you, but that doesn’t mean it won’t or that you shouldn’t take some steps to protect your system. This includes disabling root login, only allowing login at the console, putting your server behind a VPN, or making your server available only on the local network.
At the end of the day, it's easiest to just disable root login via SSH, as suggested in a previous tutorial (Disable Root Login via SSH), or to allow access only via SSH keys.
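For reference, the core of that change usually comes down to a couple of directives in the SSH daemon's configuration (a minimal sketch; paths and service commands are for a typical Debian-era system):
# /etc/ssh/sshd_config
PermitRootLogin no
# optional: allow key-based logins only
PasswordAuthentication no
Reload SSH with /etc/init.d/ssh restart, and keep an existing session open while you test so a typo doesn't lock you out.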
Have a suggestion to this? Or other suggestions on protecting your system? Post them in the comments.


How To Set Up WebDAV With MySQL Authentication On Apache2 (Debian Squeeze)

Posted by Unknown
http://www.howtoforge.com/how-to-set-up-webdav-with-mysql-authentication-on-apache2-debian-squeeze


This guide explains how to set up WebDAV with MySQL authentication (using mod_auth_mysql) on Apache2 on a Debian Squeeze server. WebDAV stands for Web-based Distributed Authoring and Versioning and is a set of extensions to the HTTP protocol that allow users to directly edit files on the Apache server so that they do not need to be downloaded/uploaded via FTP. Of course, WebDAV can also be used to upload and download files.
I do not issue any guarantee that this will work for you!

1 Preliminary Note

I'm using a Debian Squeeze server with the hostname server1.example.com and the IP address 192.168.0.100 here.

2 Installing Apache2, WebDAV, MySQL, mod_auth_mysql

To install Apache2, WebDAV, MySQL, and mod_auth_mysql, we run:
apt-get install apache2 mysql-server mysql-client libapache2-mod-auth-mysql
You will be asked to provide a password for the MySQL root user - this password is valid for the user root@localhost as well as root@server1.example.com, so we don't have to specify a MySQL root password manually later on:
New password for the MySQL "root" user: <-- yourrootsqlpassword
Repeat password for the MySQL "root" user: <-- yourrootsqlpassword
Afterwards, enable the WebDAV and mod_auth_mysql modules:
a2enmod dav_fs
a2enmod dav
a2enmod auth_mysql
Restart Apache:
/etc/init.d/apache2 restart

3 Creating A Virtual Host

I will now create a default Apache vhost in the directory /var/www/web1/web. For this purpose, I will modify the default Apache vhost configuration in /etc/apache2/sites-available/default. If you already have a vhost for which you'd like to enable WebDAV, you must adjust this tutorial to your situation.
First, we create the directory /var/www/web1/web and make the Apache user (www-data) the owner of that directory:
mkdir -p /var/www/web1/web
chown www-data /var/www/web1/web
Then we back up the default Apache vhost configuration (/etc/apache2/sites-available/default) and create our own one:
mv /etc/apache2/sites-available/default /etc/apache2/sites-available/default_orig
vi /etc/apache2/sites-available/default

<VirtualHost *:80>
        ServerAdmin webmaster@localhost

        DocumentRoot /var/www/web1/web/
        <Directory /var/www/web1/web/>
                Options Indexes MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
        </Directory>
</VirtualHost>

Then reload Apache:
/etc/init.d/apache2 reload

4 Configure The Virtual Host For WebDAV

You can find the documentation for mod_auth_mysql in the /usr/share/doc/libapache2-mod-auth-mysql directory. To read it, you have to gunzip the DIRECTIVES.gz and USAGE.gz files:
cd /usr/share/doc/libapache2-mod-auth-mysql
gunzip DIRECTIVES.gz
vi DIRECTIVES
gunzip USAGE.gz
vi USAGE
Having read these two files, we create a MySQL database called webdav in which we will create the table mysql_auth which will contain our users and passwords. In addition to that we create the MySQL user webdav_admin - this user will be used by mod_auth_mysql to connect to MySQL later on:
mysqladmin -u root -p create webdav
mysql -u root -p
GRANT SELECT, INSERT, UPDATE, DELETE ON webdav.* TO 'webdav_admin'@'localhost' IDENTIFIED BY 'webdav_admin_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON webdav.* TO 'webdav_admin'@'localhost.localdomain' IDENTIFIED BY 'webdav_admin_password';
FLUSH PRIVILEGES;
(Replace webdav_admin_password with a password of your choice.)
USE webdav;
create table mysql_auth (
username char(25) not null,
passwd char(32),
groups char(25),
primary key (username)
);
(Of course, you can also use existing tables holding your user credentials, and the table can have additional fields, such as one that defines whether a user is active.)
Now we insert the user test into our mysql_auth table with the password test (MD5 encrypted); this user belongs to the group testgroup:
INSERT INTO `mysql_auth` (`username`, `passwd`, `groups`) VALUES('test', MD5('test'), 'testgroup');
You can later on use the URL http://192.168.0.100/webdav to connect to WebDAV. If you do this on a Windows XP client and type in the user name test, Windows translates this to 192.168.0.100\test. Therefore we create a second user account now:
INSERT INTO `mysql_auth` (`username`, `passwd`, `groups`) VALUES('192.168.0.100\\test', MD5('test'), 'testgroup');
(We must use a second backslash here in the user name to escape the first one!)
You don't have to do this if you specify the port in the WebDAV URL, e.g. http://192.168.0.100:80/webdav - in this case Windows will simply look for the user test, not 192.168.0.100\test.
Then we leave the MySQL shell:
quit;
Now we modify our vhost in /etc/apache2/sites-available/default and add the following lines to it:
vi /etc/apache2/sites-available/default
[...]
Alias /webdav /var/www/web1/web

<Location /webdav>
    DAV On
    AuthBasicAuthoritative Off
    AuthUserFile /dev/null
    AuthMySQL On
    AuthName "webdav"
    AuthType Basic
    Auth_MySQL_Host localhost
    Auth_MySQL_User webdav_admin
    Auth_MySQL_Password webdav_admin_password
    AuthMySQL_DB webdav
    AuthMySQL_Password_Table mysql_auth
    Auth_MySQL_Username_Field username
    Auth_MySQL_Password_Field passwd
    Auth_MySQL_Empty_Passwords Off
    Auth_MySQL_Encryption_Types PHP_MD5
    Auth_MySQL_Authoritative On
    require valid-user
</Location>

[...]
The Alias directive, together with the <Location /webdav> container, ensures that WebDAV is invoked when you call /webdav, while you can still access the whole document root of the vhost. All other URLs of that vhost are still "normal" HTTP.
The AuthBasicAuthoritative Off and AuthUserFile /dev/null lines are there to prevent errors like these in your Apache error log (/var/log/apache2/error.log):
[Wed Jun 11 17:02:45 2008] [error] Internal error: pcfg_openfile() called with NULL filename
[Wed Jun 11 17:02:45 2008] [error] [client 127.0.0.1] (9)Bad file descriptor: Could not open password file: (null)
If you have additional fields in your MySQL table that define if a user is allowed to log in or not (e.g. a field called active), you can add the Auth_MySQL_Password_Clause directive, e.g.:
[...]
Auth_MySQL_Password_Clause " AND active=1"
[...]
(It is important that the string within the quotation marks begins with a space!)
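For example, if your table doesn't have such a field yet, you could add one like this (purely illustrative; the column name active just has to match whatever you reference in Auth_MySQL_Password_Clause):
ALTER TABLE mysql_auth ADD active TINYINT(1) NOT NULL DEFAULT 1;
UPDATE mysql_auth SET active=0 WHERE username='test';
With the clause above in place, the user test would then be rejected even if the password is correct.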
The require valid-user directive means that any user listed in the mysql_auth table can log in, as long as he/she provides the correct password. If you only want certain users to be allowed to log in, you'd use something like
[...]
require user jane joe
[...]
instead. And if you only want members of certain groups to be allowed to log in, you'd use something like this:
[...]
require group testgroup
[...]
The final vhost should look like this:

<VirtualHost *:80>
        ServerAdmin webmaster@localhost

        DocumentRoot /var/www/web1/web/
        <Directory /var/www/web1/web/>
                Options Indexes MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
        </Directory>

        Alias /webdav /var/www/web1/web

        <Location /webdav>
                DAV On
                AuthBasicAuthoritative Off
                AuthUserFile /dev/null
                AuthMySQL On
                AuthName "webdav"
                AuthType Basic
                Auth_MySQL_Host localhost
                Auth_MySQL_User webdav_admin
                Auth_MySQL_Password webdav_admin_password
                AuthMySQL_DB webdav
                AuthMySQL_Password_Table mysql_auth
                Auth_MySQL_Username_Field username
                Auth_MySQL_Password_Field passwd
                Auth_MySQL_Empty_Passwords Off
                Auth_MySQL_Encryption_Types PHP_MD5
                Auth_MySQL_Authoritative On
                require valid-user
        </Location>
</VirtualHost>

Reload Apache afterwards:
/etc/init.d/apache2 reload

5 Testing WebDAV

We will now install cadaver, a command-line WebDAV client:
apt-get install cadaver
To test if WebDAV works, type:
cadaver http://localhost/webdav/
You should be prompted for a user name. Type in test and then the password for the user test. If all goes well, you should be granted access which means WebDAV is working ok. Type quit to leave the WebDAV shell:
root@server1:~# cadaver http://localhost/webdav/
Authentication required for webdav on server `localhost':
Username: test
Password:
dav:/webdav/> quit
Connection to `localhost' closed.
root@server1:~#
Now test again with the username 192.168.0.100\test (this is the format that Windows XP needs if you don't use :80 in the WebDAV URL):
cadaver http://localhost/webdav/
root@server1:~# cadaver http://localhost/webdav/
Authentication required for webdav on server `localhost':
Username: 192.168.0.100\test
Password:
dav:/webdav/> quit
Connection to `localhost' closed.
root@server1:~#

6 Configure A Windows XP Client To Connect To The WebDAV Share

This is described on http://www.howtoforge.com/how-to-set-up-webdav-with-apache2-on-debian-lenny-p2.
If you don't use :80 in the WebDAV URL (i.e. http://192.168.0.100/webdav), you must log in with the username 192.168.0.100\test; if you do use :80 (http://192.168.0.100:80/webdav), then you can simply log in with the username test.

7 Configure A Linux Client (GNOME) To Connect To The WebDAV Share

This is described on http://www.howtoforge.com/how-to-set-up-webdav-with-apache2-on-debian-lenny-p3.

8 Troubleshooting

It's a good idea to watch the Apache error log (/var/log/apache2/error.log) while you're trying to connect to WebDAV, e.g. with this command:
tail -f /var/log/apache2/error.log
If you get an error like this:
[Wed Jun 11 15:39:04 2008] [error] [client 192.168.0.46] (13)Permission denied: Could not open property database. [500, #1]
this means that /var/lock/apache2 is not owned by the Apache user (www-data on Debian). You can fix this problem by running:
chown www-data /var/lock/apache2
If Windows keeps asking and asking about the username and password, you should specify the port in the WebDAV URL, e.g. http://192.168.0.100:80/webdav (see chapter four).


Maintaining Remote Web Sites With sitecopy (Debian Squeeze/Ubuntu 11.10)

Posted by Unknown

sitecopy is a tool for copying locally stored web sites to a remote web server (using FTP or WebDAV). It helps you to keep the remote site synchronized with your local copy by uploading modified local files and deleting remote files that have been deleted on the local computer. This tutorial shows how you can manage your remote web site from your local Debian Squeeze/Ubuntu 11.10 desktop with sitecopy.
I do not issue any guarantee that this will work for you!

1 Preliminary Note

I'm using the username falko on my local Debian Squeeze/Ubuntu 11.10 desktop (I'm logged in on my local Linux desktop as that user - please don't log in as root). The files for the remote web site example.com are stored in the directory /home/falko/sites/example.com/ on the local computer. The remote document root is /var/www/example.com/web/.
You can use sitecopy with FTP and WebDAV, so you should either have an FTP or a WebDAV account on the remote server. I'm using the FTP/WebDAV username defaultfalko and the password howtoforge here.

2 Installing sitecopy

sitecopy can be installed on the local desktop as follows (we need root privileges, therefore we use sudo):
sudo apt-get install sitecopy
You should now take a look at sitecopy's man page to familiarize yourself with its options:
man sitecopy

3 Configuring sitecopy

Go to your home directory on the local desktop...
cd ~
... and create the directory .sitecopy with permissions of 700 (sitecopy uses that directory to store file details):
mkdir -m 700 .sitecopy
Next create the sitecopy configuration file .sitecopyrc:
touch .sitecopyrc
chmod 600 .sitecopyrc
Open the file...
vi .sitecopyrc
... and fill in the configuration for the example.com site. Here are two examples, one for FTP...
site example.com
server example.com
username defaultfalko
password howtoforge
local /home/falko/sites/example.com/
remote ~/web/
exclude *.bak
exclude *~
... and one for WebDAV:
site example.com
server example.com
protocol webdav
username defaultfalko
password howtoforge
local /home/falko/sites/example.com/
remote /var/www/example.com/web/
exclude *.bak
exclude *~
(You can define a stanza for each web site you want to manage with sitecopy.)
The site directive must be followed by a name for the web site - you can freely choose one, e.g. example.com or mysite. This name will be used later on in the sitecopy commands. The following configuration options that belong to that site must be indented!
Most of the following configuration options are self-explanatory. The default protocol is FTP; if you want to use WebDAV, please specify protocol webdav. The local directive contains the local path of the web site copy, and remote contains the path of the web site on the remote server - it can be absolute or relative. If your user is chrooted (as is normally the case with FTP users), you should use a relative path (such as ~/ or ~/web). Otherwise use an absolute path.
The exclude lines are optional, they are here just to demonstrate how you can exclude files from being maintained by sitecopy.
You can find out more about sitecopy configuration on its man page:
man sitecopy

4 First Usage

Before you use sitecopy for the first time, you have to decide which of the following three scenarios matches your situation:
  1. Existing remote site and local copy, both in sync.
  2. Existing remote site, no local copy.
  3. New remote site, existing local copy.

4.1 Existing Remote Site And Local Copy, Both In Sync

If both the remote site and the local copy exist and are in sync, run
sitecopy --catchup example.com
to make sitecopy think the local site is exactly the same as the remote copy. Replace example.com with the name of the site you use in the .sitecopyrc file.
falko@falko-desktop:~$ sitecopy --catchup example.com
sitecopy: Catching up site `example.com' (on example.com in ~/web/)
sitecopy: All the files and directories are marked as updated remotely.
falko@falko-desktop:~$

4.2 Existing Remote Site, No Local Copy

If you have no local copy of the existing remote web site, run
sitecopy --fetch example.com
first so that sitecopy fetches the list of files from the remote server (replace example.com with the name of the site you use in the .sitecopyrc file):
falko@falko-desktop:~$ sitecopy --fetch example.com
sitecopy: Fetching site `example.com' (on example.com in ~/web/)
File: data/index.html - size 5
File: error/503.html - size 1906
File: error/502.html - size 1881
File: error/500.html - size 1851
File: error/405.html - size 1810
File: error/404.html - size 1806
File: error/403.html - size 1809
File: error/401.html - size 1806
File: error/400.html - size 1792
File: stats/.htaccess - size 116
File: robots.txt - size 24
File: index.html - size 1861
File: favicon.ico - size 7358
File: .htaccess - size 26
Directory: data/
Directory: error/
Directory: stats/
sitecopy: Fetch completed successfully.
falko@falko-desktop:~$
Then run
sitecopy --synch example.com
to update the local site from the remote copy.
falko@falko-desktop:~$ sitecopy --synch example.com
sitecopy: Synchronizing site `example.com' (on example.com in ~/web/)
Creating data/: done.
Creating error/: done.
Creating stats/: done.
Downloading data/index.html: [.] done.
Downloading error/503.html: [.] done.
Downloading error/502.html: [.] done.
Downloading error/500.html: [.] done.
Downloading error/405.html: [.] done.
Downloading error/404.html: [.] done.
Downloading error/403.html: [.] done.
Downloading error/401.html: [.] done.
Downloading error/400.html: [.] done.
Downloading stats/.htaccess: [.] done.
Downloading robots.txt: [.] done.
Downloading index.html: [.] done.
Downloading favicon.ico: [.] done.
Downloading .htaccess: [.] done.
sitecopy: Synchronize completed successfully.
falko@falko-desktop:~$

4.3 New Remote Site, Existing Local Copy

If the local copy exists, but you have an empty remote site, run
sitecopy --init example.com
first to initialize the site. Replace example.com with the name of the site you use in the .sitecopyrc file.
falko@falko-desktop:~$ sitecopy --init example.com
sitecopy: Initializing site `example.com' (on example.com in ~/web/)
sitecopy: All the files and directories are marked as NOT updated remotely.
falko@falko-desktop:~$
Then run
sitecopy --update example.com
to upload the local copy to the remote site:
falko@falko-desktop:~$ sitecopy --update example.com
sitecopy: Updating site `example.com' (on example.com in ~/web/)
Creating stats/: done.
Creating data/: done.
Creating error/: done.
Uploading stats/.htaccess: [.] done.
Uploading data/index.html: [.] done.
Uploading error/403.html: [.] done.
Uploading error/401.html: [.] done.
Uploading error/404.html: [.] done.
Uploading error/503.html: [.] done.
Uploading error/400.html: [.] done.
Uploading error/502.html: [.] done.
Uploading error/405.html: [.] done.
Uploading error/500.html: [.] done.
Uploading index.html: [.] done.
Uploading robots.txt: [.] done.
Uploading .htaccess: [.] done.
Uploading favicon.ico: [.] done.
sitecopy: Update completed successfully.
falko@falko-desktop:~$

5 Using sitecopy

Afterwards, sitecopy usage is really easy. You can work with your local copy and update, create, and delete files. A first, but optional step is to run
sitecopy example.com
to find out which files have changed locally (replace example.com with the name of the site you use in the .sitecopyrc file):
falko@falko-desktop:~$ sitecopy example.com
sitecopy: Showing changes to site `example.com' (on example.com in ~/web/)
* These items have been added since the last update:
info.php
sitecopy: The remote site needs updating (1 item to update).
falko@falko-desktop:~$
To synchronize your remote web site with your local copy (i.e. upload new and changed files to the remote server and delete files on the remote server that have been deleted locally), you simply run
sitecopy --update example.com
falko@falko-desktop:~$ sitecopy --update example.com
sitecopy: Updating site `example.com' (on example.com in ~/web/)
Uploading info.php: [.] done.
sitecopy: Update completed successfully.
falko@falko-desktop:~$
That's it! Have fun with sitecopy!
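If you want this to happen automatically, one option is to run the update from cron as the same user that owns ~/.sitecopyrc (just a sketch; adjust the schedule, path, and log file to taste):
# crontab -e (as user falko), e.g. push changes every 30 minutes:
*/30 * * * * /usr/bin/sitecopy --update example.com >> $HOME/sitecopy.log 2>&1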


Physical Memory Analysis with the LiME Linux Memory Extractor

Posted by Unknown, Sunday, 22 April 2012

  The LiME Loadable Kernel Module allows digital investigators to perform physical memory analysis on Linux and Linux-based devices such as Android smartphones. LiME could capture currently running and previously terminated apps, for example, and the IP addresses of other devices to which it has connected. In this Linux.com interview, Joe Sylve, a Senior Security Researcher at Digital Forensics Solutions, explains what LiME is and how it works.
Linux.com: What is LiME and what's the background behind its release?
Joe Sylve: LiME (or Linux Memory Extractor) is a tool that allows the capture of volatile memory (RAM) from a running Linux device. It is the first tool of its type that also supports memory capture from Android devices. Forensics memory analysis is vital to investigations as volatile memory contains a wealth of information that is otherwise unrecoverable. Lack of such information can make certain investigative scenarios impossible, such as when performing incident response or analyzing advanced malware that does not interact with non-volatile storage.
In 2011, I was doing some research on the feasibility of using Android devices to access classified information in a forensically secure manner. The Department of Defense currently does not allow employees to access sensitive data from their mobile devices for fear that if the devices were lost or stolen sensitive data could be recovered from them. The first phase of this research was to perform a detailed forensic analysis of selected mobile devices to determine what data is stored on the device by common use cases. This included data that could be recovered from the device's RAM using "live" analysis.
The standard methodology for obtaining a capture of a device's RAM has been to use a tool such as Ivan Kolar's fmem. Attempts to port fmem to Android failed, because of several technical limitations, so that's why I developed LiME (then known as DMD). After testing it, we found that LiME actually worked better than fmem at creating a forensically sound capture on Linux devices.
Linux.com: LiME is intended to be used to capture evidence that can be relevant in criminal and civil investigations, but what prevents anyone from using LiME to invade someone's privacy?
Joe Sylve: By its very nature, computer forensics research is a double-edged sword. Any tool that can be useful for forensics in a criminal investigation has the potential to impact a user's privacy when abused; however, in order to use LiME, an investigator needs to have physical access to the device and the tool needs to be custom compiled to work for the specific running kernel on the device, so the chances that the tool could be used to invade someone's privacy without their knowledge are limited.
Linux.com: What's next for the project? Any additional features or fixes in the works?
Joe Sylve: LiME is a Loadable Kernel Module, which means for it to work it has to be specifically compiled to work on the kernel version that the device is running. It would be nice if there was a community effort to help compile LiME against as many kernel versions as possible, so that investigators and researchers could have access to a library of pre-compiled modules for the kernel versions running on the most commonly used devices.
Linux.com: Is there anything else you'd like to add?
Joe Sylve: LiME is available for download from our website. For any of your readers who are interested in the technical details of LiME, we have published a paper, Acquisition and Analysis of Volatile Memory from Android Devices, in Digital Investigation. A copy of that paper is also available on our website.


Using mod_spdy With Apache2 On Debian Squeeze

Posted by Unknown

 SPDY (pronounced "SPeeDY") is a new networking protocol whose goal is to speed up the web. It is Google's alternative to the HTTP protocol and a candidate for HTTP/2.0. SPDY augments HTTP with several speed-related features such as stream multiplexing and header compression. To use SPDY, you need a web server and a browser (like Google Chrome and upcoming versions of Firefox) that both support SPDY. mod_spdy is an open-source Apache module that adds support for the SPDY protocol to the Apache HTTPD server. This tutorial explains how to use mod_spdy with Apache2 on Debian Squeeze.
I do not issue any guarantee that this will work for you!

1 Preliminary Note

SPDY runs over HTTPS, so we need an HTTPS-enabled web site to test SPDY. Please note that SPDY will fall back to HTTPS if the user's browser does not support SPDY or if things go wrong, so installing mod_spdy doesn't hurt your existing setup.
I'm assuming that you have a working LAMP setup, as described on Installing Apache2 With PHP5 And MySQL Support On Debian Squeeze (LAMP).
For testing purposes I will simply enable the default SSL web site that comes with Debian's Apache package (you don't need to do this if you already have an SSL web site on your server).
To enable SSL, just run:
a2enmod ssl
To enable the default SSL web site, run:
a2ensite default-ssl
Restart Apache afterwards:
/etc/init.d/apache2 restart
Go to the default SSL web site's URL (e.g. https://www.example.com) and check that it works. (I'm using the default self-signed certificate here, which is why I get a certificate warning, but this has no effect on using SPDY.)



2 Installing mod_spdy

Google provides Debian packages for mod_spdy on https://developers.google.com/speed/spdy/mod_spdy/. Simply download the correct one for your architecture (32- or 64-bit) to your server...
64-bit:
cd /tmp
wget https://dl-ssl.google.com/dl/linux/direct/mod-spdy-beta_current_amd64.deb
32-bit:
cd /tmp
wget https://dl-ssl.google.com/dl/linux/direct/mod-spdy-beta_current_i386.deb
... and install it as follows:
dpkg -i mod-spdy-*.deb
apt-get -f install
(This will also add the Google mod_spdy repository to the apt sources so that the module will be kept up-to-date:
cat /etc/apt/sources.list.d/mod-spdy.list
### THIS FILE IS AUTOMATICALLY CONFIGURED ###
# You may comment out this entry, but any other modifications may be lost.
deb http://dl.google.com/linux/mod-spdy/deb/ stable main
)
Restart Apache afterwards:
/etc/init.d/apache2 restart
The good thing is that mod_spdy needs no configuration; it works out of the box!
(In fact, there is a configuration file, /etc/apache2/mods-available/spdy.conf, but the default settings should be ok.
cat /etc/apache2/mods-available/spdy.conf

# Turn on mod_spdy. To completely disable mod_spdy, you can set
# this to "off".
SpdyEnabled on

# In order to support concurrent multiplexing of requests over a
# single connection, mod_spdy maintains its own thread pool in
# each Apache child process for processing requests. The default
# size of this thread pool is very conservative; you can override
# it with a larger value (as below) to increase concurrency, at
# the possible cost of increased memory usage.
#
#SpdyMaxThreadsPerProcess 30

# Memory usage can also be affected by the maximum number of
# simultaneously open SPDY streams permitted for each client
# connection. Ideally, this limit should be set as high as
# possible, but you can tweak it as necessary to limit memory
# consumption.
#
#SpdyMaxStreamsPerConnection 100
You can learn more about the configuration options on https://developers.google.com/speed/spdy/mod_spdy/install.
)

3 Testing

Now let's test if SPDY is working. We need a browser with SPDY support, e.g. Google Chrome. Open Chrome and reload your SSL web site (e.g. https://www.example.com) - it is important that you reload it so that it can use SPDY (the first time you loaded it in chapter 1, it used normal HTTPS). Afterwards, open a new tab and type in the URL
chrome://net-internals/#spdy
If everything went well, your SSL vhost should now be listed in the table which means SPDY support is working.


(Because of SPDY's fallback mechanism to HTTPS, your SSL vhost will still work in any other browser that does not support SPDY.)
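If you'd rather check from the command line, a sufficiently new OpenSSL (1.0.1 or newer) can show whether the server advertises SPDY during the TLS handshake via Next Protocol Negotiation - a rough sketch, assuming your own hostname:
openssl s_client -connect www.example.com:443 -nextprotoneg ''
# look for a line similar to: Protocols advertised by server: spdy/3, spdy/2, http/1.1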


iptables: Small manual and tutorial with some examples and tips

Posted by Unknown

This is a small manual of iptables. I’ll show some basic commands you may need to know to keep your computer secure.

Basic commands

List rules
iptables -L
This is going to list the default table, “filter”.
Edit: You may prefer to use iptables -L -vn to get more information, and to see ports as numbers instead of their names.
List rules in a specific table
iptables -L -t nat
You can also list the other tables: mangle, raw, and security. You should consider reading a bit more about tables; you can do so in the TABLES section of the iptables man page.
Delete all rules
iptables -F
Delete all rules in a specific table, like nat
iptables -t nat -F
Specify chain policies
iptables lets you configure default policies for chains in the filter table, where INPUT, FORWARD, and OUTPUT are the main ones (or at least the most used). Users can even define new chains.
These chains are explained in more detail in the iptables packet-flow diagram on Wikipedia.
iptables -P INPUT DROP
iptables -P FORWARD ACCEPT
iptables -P OUTPUT DROP
You can define the default policy as ACCEPT and then deny specific traffic, or define the default policies as DROP and then open specific traffic to and/or from your box. The latter is more secure, but requires more work.
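For example, a minimal default-deny skeleton for the INPUT chain might look like this (illustrative only; adapt the ports to the services you actually run):
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# explicitly open only the services you need, e.g. SSH and HTTP
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT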
Block IP traffic from a specific IP or network
Block from an IP
iptables -A INPUT -s 11.22.33.44 -j DROP
If you want to block it only on a specific NIC:
iptables -A INPUT -s 11.22.33.44 -i eth0 -j DROP
Or a specific port:
iptables -A INPUT -s 11.22.33.44 -p tcp --dport 22 -j DROP
Using a Network and not only one IP
iptables -A INPUT -s 11.22.33.0/24 -j DROP
Block traffic from a specific MAC address
Suppose you want to block traffic from a MAC address instead of an IP address. This is handy if a DHCP server keeps changing the IP of the machine you want to protect against.
iptables -A INPUT -m mac --mac-source 00:11:2f:8f:f8:f8 -j DROP
Block a specific port
If all you want is to block a port, iptables can still do it.
And you can block incoming or outgoing traffic.
Block incoming traffic to a port
Suppose we need to block port 21 for incoming traffic:
iptables -A INPUT -p tcp --destination-port 21 -j DROP
But suppose you have a two-NIC server, with one NIC facing the Internet and the other facing your local private network, and you only want to block FTP access from the outside world:
iptables -A INPUT -i eth1 -p tcp --destination-port 21 -j DROP
In this case I’m assuming eth1 is the one facing the Internet.
You can also block a port from a specific IP address:
iptables -A INPUT -p tcp -s 22.33.44.55 --destination-port 21 -j DROP
Or even block access to a port from everywhere but a specific IP range.
iptables -A INPUT -p tcp ! -s 22.33.44.0/24 --destination-port 21 -j DROP
Block outgoing traffic to a port
Suppose you want to forbid outgoing traffic to port 25. This is useful if, for example, you are running a Linux firewall for your office and want to stop viruses from sending emails.
iptables -A FORWARD -p tcp --dport 25 -j DROP
I’m using FORWARD because in this example the server is a firewall, but you can use OUTPUT too, to also block traffic originating from the server itself.
Log traffic, before taking action
Sometimes you want to log traffic before blocking it. For example, suppose there is a rule in your office that no employee may log in to a given server, and you enforce it by blocking access to the SSH port, but at the same time you want to find out who tried.
iptables -A INPUT -p tcp --dport 22 -j LOG --log-prefix "dropped access to port 22"
iptables -A INPUT -p tcp --dport 22 -j DROP
You will be able to see which IP tried to access the server, even though the attempt itself was blocked.

Tips and Tricks

Because iptables executes rules in order, if you want to change something you need to insert the new rule at the right position, or the desired effect will not be achieved.
List rules with numbers
iptables -nL --line-numbers
This is going to list all your rules, with a number preceding each rule. Note the position where you want to insert a new rule (see "Insert rules" below).
List specific chains
iptables -nL INPUT
Will list all INPUT rules.
iptables -nL FORWARD
Will list all FORWARD rules.
Insert rules
iptables -I INPUT 3 -s 10.0.0.0/8 -j ACCEPT
That is going to insert the rule at position 3 of the INPUT chain.
Delete rules
iptables -D INPUT 3
That is going to remove the rule inserted above. You can also remove it by matching it:
iptables -D INPUT -s 10.0.0.0/8 -j ACCEPT
Flush all rules and delete all chains
These steps are very handy if you want to start with completely empty, default tables:
iptables --flush
iptables --table nat --flush
iptables --table mangle --flush
iptables --delete-chain
iptables --table nat --delete-chain
iptables --table mangle --delete-chain
NOTE: do not execute these rules if you are connected via SSH or something similar; you may get locked out.

Simple scripts for specific needs

How to stop brute force attacks
You can also use iptables to stop brute force attacks on your server. For example: allow only three attempts to log in through SSH before banning the IP for 15 minutes. This should let legitimate users log in to the server, while bots will not be able to. Remember to always use strong passwords.
iptables -F
iptables -A INPUT -i lo -p all -j ACCEPT
iptables -A OUTPUT -o lo -p all -j ACCEPT
iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport ssh -j ACCEPT
iptables -A INPUT -p tcp --dport www -j ACCEPT
iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --set
iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --update --seconds 900 --hitcount 3 -j DROP
iptables -P INPUT DROP
How to NAT with iptables
iptables is also very useful for configuring NAT routers: a Linux machine can act as a router and share its public IP with a private network behind it. It is also common to run the DHCP server on the same machine.
To configure a NAT router, you will be better with a server with two NICs, let’s suppose you have:
  • eth0: 12.13.14.15
  • eth1: 10.1.1.1
Now configure NAT to forward all traffic from the 10.1.1.0/24 network through eth0's IP. You may want to empty all tables and start with fresh chains and tables (see how above).
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
iptables --append FORWARD --in-interface eth1 -j ACCEPT
That is it, you only have to enable kernel forwarding now:
echo 1 > /proc/sys/net/ipv4/ip_forward
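Note that this echo only lasts until the next reboot; to make forwarding permanent you can set it in /etc/sysctl.conf instead and apply it with sysctl -p:
# /etc/sysctl.conf
net.ipv4.ip_forward = 1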


How To Simulate Network Devices Using SNMP Simulator

Posted by Unknown

This tutorial explains how you can simulate network devices for testing purposes with the free Verax SNMP Simulator. The Verax SNMP agent simulator is a tool that can simulate multiple SNMPv1/v2c agents on a single host on the standard port 161 through multi-netting. It allows IT personnel to build virtual, simulated networks of devices without purchasing any additional hardware, for instance for testing purposes. Individual simulated agent responses can initially be retrieved from existing devices and modified at runtime by user-defined rules.
 

Requirements And Tools Used

The Verax SNMP Agent Simulator can be installed on 32- and 64-bit Linux distributions including SuSE, RedHat Enterprise, and Debian, on i386 and x64 architectures. It can also be installed in any operating environment supporting Java 1.6 or higher (AS/400, FreeBSD, and others).
Before the installation you should check:
  • RAM: at least 128 MB (depending on the number of SNMP agents).
  • Disk space: at least 100 MB (depending on the number of SNMP agents).
  • TCP/IP network connection.

Installation

The installation process consists of the following steps:
  1. Download and unzip vxsnmpsimulator-1.0.1.zip file.
  2. Unzip and copy package content to installation directory e.g.: /usr/local/vxsnmpsimulator
  3. Move the simulator.conf file to /etc/verax.d/ (create the /etc/verax.d directory if it does not exist)
  4. Open simulator.conf, find a line with SIMULATOR_HOME variable and change the variable to point to the installation directory as required, e.g.: SIMULATOR_HOME=”/usr/local/vxsnmpsimulator”
  5. If running on Linux, copy simulatord file to /etc/init.d directory.
  6. If running on Linux, give execute permission to the file:
    chmod +x /etc/init.d/simulatord
  7. Make sure that java is in the PATH environment variable (the simulator scripts invoke java without any path prefix).
At this stage the simulator is ready to run, but it is recommended to edit the device.conf.xml file first. Otherwise, the default configuration will be used.

Managing Simulator Service

  1. Starting the Verax SNMP Simulator: Issue the following command in the terminal window shell:
    service simulatord start
    On Linux, the simulation process runs as a background daemon and can be managed as any other service (e.g. can be configured to be started on system startup). On Windows it runs as a foreground process started by the simulator.bat batch file.

    NOTE: Once the simulator is started, a log file will be created in the simulator's installation folder.
  2. Stopping the Verax SNMP Simulator: Issue the following command in the terminal window shell:
    service simulatord stop
  3. Opening the simulator management console: Issue the following command in the terminal window shell:
    service simulatord console


Working With Simulator Management Console

  1. Connecting to the simulator service: Once the management console has been opened, it asks for the connection details (the console may connect to multiple servers). By default, the simulator service process is running on the same server as the management console – in such a case confirm the default parameters by pressing “y” at the prompt:

    Read default connection parameters? [y/n]

    The default connection parameters are: 127.0.0.1:43500 (localhost as the host name and 43500 for TCP port).

    Once connected, use HELP command to see available options.
  2. Management Console commands: Management Console provides two levels of management:

    Level 1 – for management of device types supported by the simulator (add and remove device type, start and stop devices). Device type is considered as a group of devices using the same SNMP record file.

    Level 2 – for management of devices (agent instances) under current device type (start, stop, add, remove devices).

    A specific set of commands is available for each level. In order to see all available commands for the current level, use HELP command.

Managing Virtual Interfaces

The simulator requires virtual interfaces to run simulated devices. Each simulated device has a separate IP address assigned to a separate virtual interface. Virtual interfaces must be configured before starting the simulator. Currently Verax SNMP Simulator supports automatic interface management for Linux only.
Issue the following command in the terminal window shell:
service simulatord console

SNMP Record Files

Each simulated network device is represented by a set of SNMP objects which are exposed by the simulator and can be read by external applications (e.g. by a network management system). SNMP objects are kept in files called SNMP record files. Each SNMP record file contains the SNMP objects representing a single device type (e.g. a Cisco switch).
An SNMP record file is a plain text file in which each line represents one SNMP object. A single line in this file has the following format:
OID = TYPE: VALUE [MODIFIER]
Where:
OID – numerical identifier of SNMP objects e.g. “.1.3.6.1.2.1.2.1.0”,
TYPE – type of the object as defined by SMI,
VALUE – value of the object,
MODIFIER – optional modifier of object value
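For illustration, a few lines of a hypothetical record file could look like this (the OIDs are standard MIB-2 system objects; check the simulator's documentation for the exact set of type keywords it accepts):
.1.3.6.1.2.1.1.5.0 = STRING: core-switch-01
.1.3.6.1.2.1.1.3.0 = Timeticks: 2887411
.1.3.6.1.2.1.2.1.0 = INTEGER: 24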





Additional Info (SNMP Modifier Types)

If more than one device is simulated from the same SNMP record file, each device will expose the same SNMP object values. To differentiate object values, separate SNMP record files with different values can be created (which often requires a lot of manual work), or modifiers can be applied. Using modifiers requires the user to familiarize himself with the modifier syntax; however, it speeds up the process of defining simulated devices, especially for large networks. A modifier is an optional element in an object definition in an SNMP record file that follows the object value and modifies it.
There are two types of modifiers:
Pre-loaded modifier – the object value is modified upon simulator start, when the SNMP record files are loaded. This modifier generates a constant object value which will be returned on every read operation.
Post-loaded modifier – the object value is modified on every read operation, so the value returned will be different each time it is read. This modifier can be used to simulate performance counters or other objects representing constantly changing metrics.


Implement strong WiFi encryption the easy way with hostapd

Posted by Unknown, Thursday, 19 April 2012
Summary: Keep wireless security simple. hostapd, the Host Access Point daemon, provides solid WiFi encryption that meets enterprise standards without all the overhead of running FreeRADIUS. Learn more about this tool and how to incorporate it into your environment.


Introduction
hostapd, the Host Access Point daemon, provides strong WPA2 encryption and authentication on Linux-based wireless access points. It is fairly simple to configure, supports WPA2-Personal and Enterprise, and also provides a unique modification to WPA2-Personal that makes it both strong and simple to administer.
The gap between WPA2-Personal and WPA2-Enterprise is rather large. WPA2-Enterprise is the strongest wireless security, but it is complex to administer because it requires a public key infrastructure (PKI) with server and client certificates, and a certificate authority. Most shops use FreeRADIUS servers to manage all this, which is overkill when you have a small number of access points to manage, or want to set one up for a temporary event or project.
WPA2-Personal is supposed to be both easy and strong for small shops because it uses a single shared key for all users, which is easy to roll out but presents ongoing security and administration hassles. There is no good way to remove a single user, because any time the key is changed the new key has to be distributed to all users, and you can't keep unwanted users out because they only need one friend on the inside to get the new key. This is better suited for a semi-public hotspot; for example, you want to provide free WiFi for visitors to your office, but not to every freeloader in the neighborhood.
hostapd gives you a great middle ground, a way to use WPA2-Personal with individual keys for each user rather than a single shared key for everyone. These keys are just passwords in the hostapd configuration file and on the clients, so a PKI or a separate authentication server is unnecessary.
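For example, the wpa_psk_file referenced in the configuration shown later (here /etc/hostapd-psk) simply pairs each client's MAC address with that user's passphrase, with the all-zeros MAC acting as a wildcard entry; the addresses and passphrases below are, of course, made up:
# /etc/hostapd-psk
00:11:22:33:44:55 AliceLongRandomPassphrase
00:66:77:88:99:aa BobLongRandomPassphrase
00:00:00:00:00:00 FallbackGuestPassphrase
To revoke a single user, you delete that one line and restart hostapd; nobody else's key changes.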
Remember that traffic is encrypted only between the client and the wireless router, to prevent eavesdropping on the wireless link; it does not provide end-to-end encryption. That is a job for something like OpenVPN or an SSH tunnel.
Prerequisites
You need a wireless access point that either includes hostapd or lets you install it, the iw command, and you'll need wpa_supplicant on a Linux PC for testing. Your access point should support hostapd 0.6.8 or newer. (The current hostapd release is 0.7.3.) With version 0.6.8 hostapd implemented the nl80211 driver. No special drivers are needed for any wireless interface card (WIC) supported by the mac80211 framework in the Linux kernel; it is built-in native support.
The nl80211 driver moves encryption, authentication, key rotation, and other access point functions into userspace. If you typically use the iwconfig command, start using the iw command because iwconfig does not work on 0.6.8.
DD-WRT and OpenWRT are two excellent open source firmware replacements for consumer-level wireless routers like the Linksys WRT54G-type devices (see Resources), and they include hostapd. Both have extensive databases of supported devices. I prefer to create my own WAPs using stripped-down Linuxes on Soekris, PC Engines, MicroTik single-board computers (see Resources). These little boards are durable, and I get complete control and flexibility.
If you like to build your own WAP, the most important component is a WIC with native Linux kernel support, and that supports the all-important AP mode. This is also called Access Point, Master, and Infrastructure mode, and it is required for a wireless access point. Many wireless network interfaces do not support AP mode, but are only client devices stripped down to a minimal functionality. I stick with Atheros wireless interfaces because they are fully-featured, and well-supported with both their legacy Madwifi drivers and the newer mac80211 drivers.
Avoid ndiswrapper on your access point. It's a nice hack for making a WIC work when you have no other options, but it is still a hack that hides a multitude of problems. Stick with good wireless interfaces with native kernel support.
Consult the Linux Wireless project's device databases to find supported interfaces, along with plenty of information on wireless drivers and userspace commands (see Resources). The Linux Wireless project has done a great job of cleaning up and harmonizing the Linux wireless stack.
It's easier on the client side as nearly any WiFi-compliant WIC with native Linux kernel support can connect to your access point with strong WPA2 security. Mac and Windows® clients can also use your nice Linux-based access point.
Probing WICs
How do you know what functions your WIC supports? iw tells you. Look for the "Supported interface modes" section to see if it supports AP mode. Listing 1 shows an example.

Listing 1. iw listing
$ iw list

[...]

Supported interface modes:
   * IBSS
   * managed
   * monitor
   * AP
   * AP/VLAN

This example shows a WIC that supports AP mode and wireless VLANs. IBSS is ad-hoc mode, and monitor mode is for sniffing wireless networks. All WICs support managed mode, in which the WIC acts as a client of an access point.
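The full iw list output is long; to see just this section, you can filter it with standard grep options (adjust the line count to your hardware):

$ iw list | grep -A 8 "Supported interface modes"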
On Atheros interfaces that use the Madwifi drivers, try wlanconfig. See Listing 2.

Listing 2. Example of wlanconfig
# wlanconfig ath0 list caps
ath0=7782e40f

The hexadecimal value encodes the card's capability flags, which wlanconfig decodes and lists after it; this interface supports AP mode, as well as WPA2 and the strong AES-CCMP cipher.
Another good way to probe your wireless hardware is with the extremely useful hwinfo command. It has a special option for wireless interfaces, and produces all kinds of great information as the snippet in Listing 3 shows:

Listing 3. Example of hwinfo data for WIC
$ hwinfo --wlan
27: PCI 500.0: 0282 WLAN controller
Model: "Intel WLAN controller"
Driver: "iwlagn"
Driver Modules: "iwlagn"
WLAN encryption modes: WEP40 WEP104 TKIP CCMP
WLAN authentication modes: open sharedkey wpa-psk wpa-eap
Status: iwlagn is active
Driver Activation Cmd: "modprobe iwlagn"

hwinfo names the driver, lists the supported encryption and authentication modes, gives the name of the device, and much more. This particular WIC will not work as an access point, because its iwlagn driver does not support AP mode, so hostapd cannot use it. (It is part of a low-budget integrated Centrino chip.) You can also try lspci for PCI network interfaces and lsusb for USB interfaces.
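For example, a couple of quick ways to list candidate devices; the grep pattern is just one way to narrow the PCI output:

$ lspci -nnk | grep -iA3 "network controller"
$ lsusb

The -k option reports the kernel driver in use, which you can then look up in the device databases mentioned above.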
Configuring hostapd
Installation depends on which Linux distribution you use, so I shall leave the details as homework, though a couple of typical package-manager commands are sketched below. The plan is to configure hostapd on the access point first, and then wpa_supplicant on a Linux client PC to test the key exchange.
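As a sketch only, since package names vary by distribution and release, the server and client pieces are usually packaged as hostapd and wpa_supplicant. On Debian or Ubuntu:

# apt-get install hostapd wpasupplicant

On Fedora or Red Hat:

# yum install hostapd wpa_supplicant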
If you do not have a /etc/hostapd.conf file on your access point then create a new one. If your installation provides one, make a backup copy of it for reference and start with a clean new one. The example in Listing 4 has everything you need for our WPA2-Personal setup:

Listing 4. Example /etc/hostapd.conf
interface=ath0
bridge=br0
driver=nl80211
ssid=alracnet
auth_algs=1
wpa=2
wpa_psk_file=/etc/hostapd-psk
wpa_key_mgmt=WPA-PSK
wpa_pairwise=CCMP TKIP
rsn_pairwise=CCMP

You might need to replace some of these parameters, such as the interface, driver, and ssid, with your own values. When you list more than one option, separate them with spaces, as on the wpa_pairwise line. Here are notes on this example:
  • Atheros interfaces using the legacy Madwifi drivers are named athX; interfaces driven by mac80211 are generally named wlanX.
  • Leave out the bridge line if your access point does not have an Ethernet bridge.
  • The driver is nl80211 if you're using hostapd 0.6.8 or later and a WIC with mac80211 support. The only supported legacy drivers are HostAP, madwifi, and prism54. Pre-0.6.8 hostapd releases support the hostap, wired, madwifi, test, nl80211, and bsd drivers.
  • ssid is whatever you want your ssid, or access point name, to be.
  • auth_algs=1 allows only open system authentication, which is what WPA2 uses. 2 allows only WEP shared-key authentication, and 3 allows both. Never ever use WEP (Wired Equivalent Privacy): it has been thoroughly broken for years and is trivially easy to crack.
  • wpa=2 allows only WPA2. 1 is WPA1, and 3 allows both.
  • wpa_psk_file points to the file containing the shared keys.
  • wpa_key_mgmt specifies the encryption key algorithms you want to allow. Your choices are WPA-PSK, WPA-EAP, or both. PSK is pre-shared key. EAP is Extensible Authentication Protocol, which is a framework that supports a number of different authentication methods. You do not need it for your little pre-shared key setup.
  • wpa_pairwise and rsn_pairwise control which ciphers are allowed for encrypting your data: wpa_pairwise applies to WPA1 clients and rsn_pairwise to WPA2 clients, and your choices are CCMP, TKIP, or both. CCMP is much stronger than TKIP, so you could try allowing only CCMP (a CCMP-only variant is sketched just after this list). Windows clients are notorious for being finicky and troublesome with strong security, so you might have to allow TKIP for them.
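If you do decide to require CCMP only, the two cipher lines in /etc/hostapd.conf look like this, with everything else in Listing 4 unchanged:

wpa_pairwise=CCMP
rsn_pairwise=CCMP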
Everyone should use WPA2; WEP (Wired Equivalent Privacy) is so weak it's useless, and WPA is almost as weak as WEP. WPA2 support has been mandatory in WiFi certified devices since 2006, and is supported in all modern operating systems, including Windows XP SP3. If you have to replace some WICs it's a lot cheaper than cleaning up after an intrusion.
Next, create a /etc/hostapd-psk file containing a wildcard MAC address and a simple plain-text test password up to 63 characters long:
00:00:00:00:00:00 testpassword 


Now go to your Linux client PC and create a simple configuration file for wpa_supplicant, wpa_supplicant.conf similar to the example in Listing 5.

Listing 5. Sample wpa_supplicant.conf
ctrl_interface=/var/run/supplicant
network={
ssid="alracnet"
psk="testpassword"
priority=5
}

ctrl_interface lets you interact with wpa_supplicant from the command line, for example with wpa_cli. Use your own ssid and plain-text test password, and enclose both in quotation marks. The priority value determines which configured network wpa_supplicant prefers when more than one is in range; higher numbers are tried first. Now go back to your access point and fire up hostapd in debugging mode:
# hostapd -d /etc/hostapd.conf

If there are configuration errors, it will report them and not run. Otherwise it will emit many lines of output. Press CTRL+c to stop it. When you've worked the bugs out you can configure it to start automatically, which you'll do presently.
Then on the client, stop your wireless connection if it is running and run wpa_supplicant as root:
# wpa_supplicant -i wlan0 -D wext -c wpa_supplicant.conf -d

-i specifies the wireless interface, -D wext is the generic wpa_supplicant driver, -c points to the configuration file, and -d means debug mode. You'll see lots of output on both the access point and the client. When the key exchange is successful it completes quickly, and you'll see messages like the ones in Listing 6 on the client.

Listing 6. Sample messages from wpa_supplicant
EAPOL: SUPP_BE entering state IDLE
EAPOL authentication completed successfully
RTM_NEWLINK: operstate=1 ifi_flags=0x11043 ([UP][RUNNING][LOWER_UP])
RTM_NEWLINK, IFLA_IFNAME: Interface 'wlan0' added

Hurrah, it works. Press CTRL+c to end your wpa_supplicant session. The final step is to create individual user keys. First create these on the access point, and then copy them to your clients using your favorite network configuration utilities. All graphical configurators are pretty much the same: enter the SSID, select WPA2 Personal authentication, and copy in the key.
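As a side note, once wpa_supplicant is running normally (without -d) you can check the association through the control interface defined in Listing 5; a minimal sketch, assuming the same socket path:

# wpa_cli -p /var/run/supplicant status

A successful connection reports wpa_state=COMPLETED along with the SSID and the negotiated cipher.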
Adding Users, Stronger Keys
Now it's time to tighten things up a bit and add users. Deriving the actual 256-bit pre-shared key from a plain-text passphrase is computationally expensive, and leaving plain-text passwords lying around is a risk, so wpa_supplicant comes with a nice command, wpa_passphrase, that generates the 256-bit key from a plain-text passphrase and the SSID. Use it as shown in Listing 7:

Listing 7. Creating users with wpa_passphrase
$ wpa_passphrase "alracnet" "greatbiglongpasswordbecauselongerisbetter"
network={
ssid="alracnet"
#psk="greatbiglongpasswordbecauselongerisbetter"
psk=a8ed05e96eed9df63bdc4edc77b965770d802e5b4389641cda22d0ecbbdcc71c
}
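On the client you can append this output straight to the configuration file from Listing 5 instead of retyping the key; a minimal sketch using the same file name:

$ wpa_passphrase "alracnet" "greatbiglongpasswordbecauselongerisbetter" >> wpa_supplicant.conf

Afterward, remove the old test network block and delete the commented #psk= line so the plain-text passphrase does not linger on disk.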

Back in /etc/hostapd-psk you can start to add users. Each encrypted pre-shared key must be matched with the MAC address of the user. Listing 8 shows an example.

Listing 8. Example /etc/hostapd-psk
11:22:33:44:55:66     a8ed05e96eed9df63bdc4edc77b965770d802e5b4389641cda22d0ecbbdcc71c
22:33:44:55:66:77 eac8f79f06e167352c18c266ef56cc26982513dbb25ffa485923b07bed95757a
33:44:55:66:77:aa 550a613348ffe64698438a7e7bc319fc3f1f55f6f3facf43c15e11aaa954caf6
44:55:66:77:aa:bb ad328e5f2b16bdd9b44987793ed7e09e6d7cca3131bc2417d99e48720b4de58c
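
To find a client's MAC address for this file, check the wireless interface on that machine; either of these works on a modern Linux client, assuming the interface is named wlan0:

$ ip link show wlan0
$ iw dev wlan0 info

The address appears after link/ether in the ip output and after addr in the iw output.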

When you are satisfied that everything is working, you probably want hostapd to run automatically. There are several ways to do this: create a startup script so it starts at boot, or launch it when the wireless network interface or bridge comes up. There are so many ways to configure this across the various Linux distributions that I shall leave it as your homework as well, though one possibility is sketched below. You'll want to use the -B option, which forks hostapd into the background, rather than the -d option for debugging.
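As one possible sketch, assuming a Debian-style system using ifupdown with bridge-utils and the file paths used in this article (the address is just a placeholder), you could start hostapd when the bridge comes up by adding lines like these to /etc/network/interfaces:

# bridge the wired interface; hostapd adds the wireless interface itself (bridge=br0)
auto br0
iface br0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bridge_ports eth0
    post-up /usr/sbin/hostapd -B /etc/hostapd.conf
    pre-down killall hostapd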
That wraps up your introduction to the excellent hostapd daemon. See Resources for more information.

Resources
Learn
