Ask James Reinders: Multicore vs. Manycore

Posted by Unknown, Sunday, 30 September 2012, 0 comments
http://goparallel.sourceforge.net/ask-james-reinders-multicore-vs-manycore


Leading edge insight and explanation from James Reinders, Director, Software Evangelist, Intel Corporation. Conducted by Geeknet Contributing Editors Jeff Cogswell and John Jainschigg.
Go Parallel: You talk about Multicore vs. Manycore.  Are those separate technologies?
James Reinders:  Yeah, they’re not necessarily the most perfectly defined items. I would define them by saying Multicore really started in earnest around 2005, and it’s been an incremental approach to putting on a chip designs that were already in small computers.  We used to have computers with two or four processors in them; now we have them on a single chip. Multicore seems rather incremental.
Manycore represents a little different concept. If you’re willing to put a lot more cores on a single device, what changes? Two things change: One, you have this revelation that you’re going to be highly parallel. And so the way you design the hardware also changes, because you start to optimize assuming only parallel programs. The other thing that changes is the software has to be parallel. I sometimes call these highly-parallel devices. We have the Intel MIC architecture, which realizes this concept, and the Intel Xeon Phi coprocessor.
It’s a perennial argument in computer architecture; there’s no right answer. Do you want a small number of really powerful processors or a large number of less-powerful ones?
There’s great research that’s gone on in this area for decades, going back to one of the earliest papers, a thesis by Danny Hillis, who eventually founded Thinking Machines Corporation and built the Connection Machine parallel supercomputer. With that particular machine, I would say one of the lessons was that they went too far toward simplicity. Too many things were simple, and they had to evolve their architecture. They definitely went in the direction of adding more capabilities until eventually, like many startups, they failed as a business but are largely looked at as having created a lot of brilliant people and technology.
In any case, it’s an exploration, and to this day we’re still exploring the problem. And there isn’t a right answer. It depends so much on what you’re trying to do, and having that breadth is very valuable: the industry gets different capabilities to match different needs.
Interviews are edited lightly for brevity and clarity.


Building a Private Cloud with Open Source Ganeti: Pros and Cons

Posted by Unknown, Thursday, 27 September 2012, 0 comments
http://www.linux.com/news/featured-blogs/200-libby-clark/640691-pros-and-cons-of-building-a-private-cloud-with-open-source-ganeti


Businesses considering building a private cloud with open source tools have likely explored at least one of the big three open IaaS options: OpenStack, CloudStack and Eucalyptus. These are great platforms for IT departments that have the time and technical expertise to test and deploy large clouds.
But what works for enterprise customers isn’t necessarily ideal for small to medium-size businesses or academic institutions with less demand for computing resources and a small IT staff, says Lance Albertson, associate director of operations at Oregon State University’s Open Source Lab.

Iustin Pop, lead Ganeti developer at Google.
That’s why the OSU Open Source Lab is running its private production and development cloud on Ganeti, an open source virtualization management platform developed at Google.
Google originally started Ganeti in 2005-2006 as a VMware alternative for managing virtual machines (VMs), storage and networks, not as a cloud platform. So it’s missing many of the features, such as elastic-style computing, cloud-based storage and object APIs, that come with the bigger open cloud projects, said Iustin Pop, lead developer for Ganeti at Google, via email.
It can be used for single-host management, similar to Libvirt or Xen Tools, as well as large-scale computing on the OpenStack level, Pop said. But Ganeti’s sweet spot lies somewhere in the middle: “from one to two physical machines to a couple hundred, focused on private use of traditional VMs.”
“If you want to run hundreds of stable Linux-based VMs, Ganeti is a good option,” Pop said. “If you want to provide a public facing web-based interface for dynamic VM creation/turndown, there are better projects out there.”
At Google, Ganeti doesn’t touch any of the user-facing services such as email or Web search. It runs only internal corporate services such as DNS and cache servers used by the engineering workstations.

Pros of Ganeti

OSU’s Albertson is aware of the project’s limitations. But, he says, Ganeti has proven to be the perfect production and development environment to quickly and easily spin up virtual machines for the open source projects housed at OSU’s Open Source Lab, including Busybox, Inkscape, OpenMRS, phpBB, OSGeo, Python Software Foundation and Yum.

Lance Albertson, associate director of operations at Oregon State University's Open Source Lab.

The lab’s Ganeti cloud is built for high availability and resiliency. Because it primarily uses local storage for virtual machines instead of a disk image mounted over NFS, the Ganeti cloud can generally perform faster than other cloud environments and at a lower hardware cost, Albertson said.
“If one of the physical nodes goes down we can easily bring the affected virtual machines back online on another node very quickly,” Albertson said. “Other cloud platforms don’t necessarily have hardware failure resiliency built into the platform as elegantly.”
They can also expand a Ganeti cluster with built-in utilities that can easily add a node with minimal downtime and even automatically re-balance the cluster, as sketched below.
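As a rough sketch of what that looks like from the command line, assuming the Ganeti tools and the htools balancer are installed (the hostname here is hypothetical):
# add a new node to the cluster
gnt-node add node4.example.com
# have the htools balancer (hbal) compute and execute rebalancing moves
hbal -L -X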
“It really boils down to what you need. Other platforms have a lot of other useful features but it generally comes with a lot of angst, in building, testing and deploying more complex systems,” he said. “Ganeti is really simple to set up and maintain and fits well with how we provide virtual machine hosting.”

Cons of Ganeti

Ganeti’s simplicity can also be a challenge. For example, it doesn’t automatically shift resources when a node fails. Someone has to be there, manning the ship.
It’s command line driven so there’s no nice Web interface for users to interact with the system – a problem the Open Source Lab’s Ganeti Web Manager Project aims to fix.
“We’re making a lot of headway to improve the interface for more general users,” Albertson said. “Right now it’s mostly useful for admins that want to give console access to their own virtual machines.”
“We’ve also added quota support and a permission system,” he said, “So we’ve tried to extend Ganeti to be more cloud-like in that sense.”
Ganeti has its own API, but it isn’t compatible with Amazon’s API – or any other APIs.
“It’s really meant to be a private IaaS, keeping things in-house mostly,” Albertson said.
“I can see this being useful for small businesses that want to run a few virtual machines for their systems in a closet somewhere,” he said. “Trying to do that with OpenStack or the others? It’s just too much complexity for that size of scale.”


Tutorial: Playing around with MPlayer

Posted by Unknown, 0 comments
http://tuts.pinehead.tv/2012/09/25/tutorial-playing-around-with-mplayer


This tutorial covers the usage of the wonderful media player MPlayer. It explains several options, lists some useful keyboard shortcuts, and offers tips and tricks that can be used to enhance your multimedia experience.
Difficulty: Basic

Note: this tutorial assumes that you have MPlayer installed & working and that you have some basic shell knowledge.

Playing a file

The simplest way of invoking MPlayer to play a media file is this:
[rechosen@localhost ~]$ mplayer <mediafile>
MPlayer will try to auto-detect what kind of file you’re trying to play (it usually succeeds) and play it. If it’s an audio file, it’ll just start playing and show its status and possible warnings on the command-line. If it’s a video file, it’ll open a window to play it in and then start playing.

Seeking through a file

You can seek through a file with a set of 3 keyboard shortcut pairs. Each pair makes MPlayer seek a different amount of time; each consists of one key for seeking backward and one for seeking forward. Listed below are those key pairs, for seeking backward and forward respectively:
  • Left arrow and Right arrow (10 seconds)
  • Down arrow and Up arrow (1 minute)
  • Page down and Page up (10 minutes)
Knowing these will come in handy a lot of times.

Playing a DVD

MPlayer does not have DVD menu support (sadly), but it does support playing DVDs. You can play a DVD this way:
mplayer dvd://<number>
Replace <number> with a number, like 1, 2 or 3. I personally prefer xine for DVD playback, as xine does support DVD menus.

Playing with subtitles

You can play a movie with subtitles in multiple ways. When playing a movie file, you can specify a subtitle file this way:
[rechosen@localhost ~]$ mplayer <moviefile> -sub <subtitlefile>
When playing a DVD movie, you can also use the DVD’s subtitle by specifying a language code like this:
[rechosen@localhost ~]$ mplayer dvd://<number> -slang nl,en
The above command would try to use Dutch subtitles first, and fall back on English ones if Dutch subtitles weren’t available.

Useful keyboard shortcuts

A list of useful keyboard shortcuts (sometimes called hotkeys) in MPlayer:
(note that the full list can be found in MPlayer’s man page)
  • “f” => Toggle between full-screen and windowed mode during video playback (you can set the option -fs on the command line to make MPlayer start playing in full-screen mode immediately)
  • “o” => Switch OSD (OnScreen Display) mode during video playback (for viewing how much time the movie has been playing and what its total length is)
  • “p” or Space => Pause / resume playback
  • “q” or Esc => Quit MPlayer (Esc does not quit but only stops playback when in GUI mode)
  • “/” and “*” (or “9” and “0”) => Decrease / increase playback volume respectively
  • “m” => Mute sound (toggle)
  • “T” (usually Shift + “t”) => Toggle stay-on-top (very useful if you don’t want your video window to be overlapped by another application)
  • “b” and “j” => Cycle through available subtitles
  • “x” and “z” => Adjust subtitle delay (useful if you have a subtitle that isn’t 100% synced; you can then correct the time difference on the fly)
  • “I” (usually Shift + “i”) => Show the filename of the movie being played (useful if you want to know that without interrupting the movie)
  • “1” and “2” => Adjust contrast*
  • “3” and “4” => Adjust brightness*
  • “5” and “6” => Adjust hue*
  • “7” and “8” => Adjust saturation*
*: These do not always work; see the MPlayer man page.

Generating an index

Sometimes, video files (mainly AVI files) have a corrupted index, or no index at all. This frequently is the case with incorrectly or incompletely downloaded files. Fortunately, MPlayer can generate the index it needs to play the file correctly. By using the -idx option, you can tell MPlayer to generate an index when necessary:
[rechosen@localhost ~]$ mplayer -idx <videofile>
Sometimes the file does contain an index, but a corrupted one. In those cases, you might need to force MPlayer to generate an index:
[rechosen@localhost ~]$ mplayer -forceidx <videofile>
Generating an index can take some time, depending on the size of the video file, but after that, the file should play correctly.

Correcting bad audio/video sync

Some videos (mainly flv files) are encoded in a horrible way, and MPlayer will have enormous trouble with the A/V (Audio/Video) sync. There are pretty much two possibilities in this case:
  • MPlayer is trying to fix it but the sync is worsening too fast
  • MPlayer is trying to fix something that’s already right and therefore pushes the sync away unnecessarily
In the first case, you should allow MPlayer to try harder to fix the sync:
[rechosen@localhost ~]$ mplayer -autosync 30 -mc 2.0 <videofile>
In the second case, you shouldn’t allow MPlayer to fix anything when it comes to the sync:
[rechosen@localhost ~]$ mplayer -autosync 0 -mc 0 <videofile>
You might wonder what those options mean. Well, setting autosync to a positive value allows MPlayer to gradually adapt its A/V correction algorithm. The higher the value, the faster MPlayer will try to correct it. The mc option specifies how many seconds of A/V correction MPlayer may apply per frame. Setting it to a high value (like 2.0) practically allows MPlayer to do whatever it thinks it should to correct the A/V sync. Setting it to 0 stops MPlayer from trying anything when it comes to syncing.

Using MPlayer on slow systems

As video playback is a CPU-intensive task, older and slower systems may have a hard time playing certain video files. MPlayer has a feature that helps them keep up the playback with less CPU power: -framedrop. This allows MPlayer to skip rendering a frame here and there if the CPU can’t handle it. On systems that are far too slow, it won’t be a pleasure to “watch” the movie (the majority of the frames will just not be rendered at all), but on systems that are a bit faster, this will stop the playback from having hiccups here and there. You can use the -framedrop option like this:
[rechosen@localhost ~]$ mplayer -framedrop <videofile>
Also, when trying to play MP3 or OGG Vorbis files, you might (on really slow systems) experience buffer underruns, spoiling your music experience. In that case, try using the libmad (in the case of an MP3) or the Tremor (in the case of an OGG Vorbis) audio codec. You can check whether you have one like this:
(In case of MP3)
[rechosen@localhost ~]$ mplayer -ac help | grep mad
If the above command returns a line like this:
mad libmad working libMAD MPEG layer 1-2-3 [libmad]
Then you can play an MP3 file with libmad, which uses a lot less CPU power. To do so, invoke MPlayer like this:
[rechosen@localhost ~]$ mplayer -ac mad <mp3file>
In OGG’s case, you can use the same trick to check whether you have a Tremor audio codec available:
[rechosen@localhost ~]$ mplayer -ac help | grep tremor
Sadly, I don’t have an example of what it should look like. If you seem to have a working tremor decoder, please leave a comment here so I can add it.

Playing streams from the internet

Many web radio stations make you download a playlist with different IPs and ports if you want to listen to them. MPlayer is perfectly able to play a web station stream, but the playlist is not a stream, nor a media file. If MPlayer doesn’t autodetect that it’s looking at a playlist and not at a direct stream or media file, you can try using the -playlist option:
[rechosen@localhost ~]$ mplayer -playlist <playlist>
And if the server has hiccups and causes a lot of buffer underruns (or if you have a bad connection), you can set a bigger cache size:
[rechosen@localhost ~]$ mplayer -cache 8192 -playlist <playlist>
The cache size is specified in kilobytes; the above will make MPlayer use a cache of 8 MB. Note that MPlayer doesn’t fill the whole cache before it starts playing; it only fills about 4 percent (after that, it’ll try to keep filling the cache during playback). You can alter that percentage with the -cache-min option:
[rechosen@localhost ~]$ mplayer -cache 8192 -cache-min 50 -playlist <playlist>
You can seek in a cache, but do not expect too much of it =).

Looping playback

If you want the media file you’re playing to loop a certain amount of times (or infinitely), you can specify the -loop option, like this:
[rechosen@localhost ~]$ mplayer -loop 3 <mediafile>
The above command will play the file three times and then exit.
[rechosen@localhost ~]$ mplayer -loop 0 <mediafile>
The above command will repeat playing forever, unless it is interrupted (for example by quitting MPlayer with the “q” keyboard shortcut). Infinite playback can be useful if you, for example, want a (promotion) movie to play all day on an exhibition.

Altering the playback speed

This may not be that useful, but it can be good for a laugh =). You can make MPlayer play a media file at a different speed with the -speed option. The value 1.0 means normal speed, 0.5 means twice as slow, 2.0 means twice as fast and so on. Specify the option like this:
[rechosen@localhost ~]$ mplayer -speed 2.0 <mediafile>

Altering the sample rate

You might want to alter the output sample rate sometimes (certain audio cards, for example, do not support sample rates other than, say, 48000 Hz). This is done with the -srate option, like this:
[rechosen@localhost ~]$ mplayer -srate 48000 <mediafile>
This can also be useful when exporting audio to a file (see next chapter).

Exporting the audio to a wav file

You can export the audio of a video file to a wav file this way (note that you can also use this to convert an audio file to a wav file):
[rechosen@localhost ~]$ mplayer -ao pcm <videofile>
This will export the audio to the file audiodump.wav. You can also specify a filename for the exported wav:
[rechosen@localhost ~]$ mplayer -ao pcm:file=<filename>.wav <videofile>

Watching a movie in ASCII

Another pretty useless but funny feature. There are two libraries that provide support for this: aa and caca. With libaa, you can only watch a movie in black & white ASCII, while libcaca supports colors. However, libaa is more widely supported. You can watch a movie with libaa this way:
[rechosen@localhost ~]$ mplayer -vo aa <videofile>
And, if you want to (and can) use libcaca:
[rechosen@localhost ~]$ mplayer -vo caca <videofile>

Exporting a movie to a lot of pictures

MPlayer can also export a movie to a load of images. For example:
[rechosen@localhost ~]$ mplayer -vo jpeg <videofile>
Warning: the above command will output a huge number of JPEG files. I strongly recommend doing this in a freshly made, empty directory created for this purpose.
The filenames of the exported JPEG files will look like this:
  • 00000001.jpg
  • 00000002.jpg
  • 00000003.jpg
  • And so on…
You can export to some other formats as well. Just replace jpeg in the command above with ppm, png or tga. Note that all these image formats have their own options, too; look for them in MPlayer’s man page.
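For example, the png output driver accepts a compression level from 0 (none) to 9 (maximum); a quick sketch, with <videofile> standing in for your input:
[rechosen@localhost ~]$ mplayer -vo png:z=9 <videofile>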

Specifying an aspect ratio

When playing video files on, for example, a wide laptop screen, you’ll probably want to benefit from that wideness by watching a movie in a 16:9 aspect ratio. You can do that this way:
[rechosen@localhost ~]$ mplayer -aspect 16:9 <videofile>
Of course, you can also specify 4:3 as the ratio to force MPlayer to show the movie in non-widescreen format.

Putting options in your MPlayer config file

MPlayer has a nice way of storing options so they will be automatically set every time you invoke the MPlayer command. This can be useful if your system, for example, always needs the audio output at a different sample rate. However, the syntax of the config file is a little different. If you’d type -srate 48000 on the command line, this should be specified in the config file as srate=48000. More complex options, like -ao pcm:file=<filename>.wav, should be put between quotes, like this: ao="pcm:file=<filename>.wav". The config file is located at ~/.mplayer/config for a user, and a global configuration file is located at /etc/mplayer/config. The different values are separated by newlines, like this:
# MPlayer config file
srate=48000
ao="pcm:file=dumpedaudio.wav"

Final words

Although this tutorial lists and explains quite a bunch of MPlayer features, this wonderful program offers a lot more! Have a look at its man page or at the documentation on the MPlayer website. Anyway, I hope this tutorial helped you. Please help promote this website a little, so other people can benefit from its growing knowledge. Thank you for reading, and happy MPlaying!


Calculating the Cost of Full Disk Encryption

Posted by Unknown, 0 comments
http://www.networkcomputing.com/security/calculating-the-cost-of-full-disk-encryp/240006508


Is full disk encryption (FDE) worth it? A recent study conducted by the Ponemon Institute shows that the expected benefits of FDE exceed cost by a factor ranging from four to 20, based on a reduction in the probability that data will be compromised as the result of the loss or theft of a digital device.
The report, "The TCO for Full Disk Encryption," was conducted independently by Ponemon and sponsored by WinMagic. The stated purpose of the study was to learn how organizations are deploying software and hardware FDE systems, as well as to determine the total cost of ownership of such deployments across different industries.

"Encryption is important to mitigating the damage caused by data breaches, complying with privacy and data protection regulations, and preserving brand and reputation," states the report. "In order to make rational decisions regarding the optimum use of encryption, it is important to comprehend the total cost of ownership (TCO). This particularly applies to solutions believed to be free but may have significantly higher TCO than commercial products."
Ponemon surveyed 1,335 people in IT and IT security in the U.S., the U.K., Germany and Japan. Respondents had an average of 10 years of relevant experience.
The study measured costs in 11 segments: licensing, maintenance, incremental costs, device pre-provisioning, device staging, tech time spent on password resets, end-user downtime spent during password resets, cost associated with re-imaging hard drives, end-user downtime associated with initial disk encryption, end-user time spent operating an FDE-enabled computer, and the value of tech time incurred for various administrative tasks related to encrypted drives. The TCO was the sum of each of these costs per computer for one full year.
While the study found that the benefits of full disk encryption generally exceed the cost in all four of the countries studied, TCO varied by organizational size and industry. In terms of company size, the TCO is highest for organizations with fewer than 50 employees ($403) and for companies with more than 25,000 employees ($315). Highly regulated industries such as finance and healthcare saw the highest TCO ($388 and $366, respectively), while less regulated industries saw lower TCO. For example, the TCO in entertainment and media was $201.
The study found that the most expensive element of FDE is not the hardware or software involved, but the value of user time it takes to start up, shut down and hibernate computing systems while using FDE. Also adding to the cost is the time it takes technicians to complete full disk encryption procedures. These costs must be taken into consideration, the report suggests, when considering the use of free FDE systems and those included with operating systems as opposed to commercial products.
To gauge the cost benefit of FDE, Ponemon looked at the average number of laptop or desktop computers stolen in the four countries studied, as well as the average number of records potentially at risk on lost or stolen devices.
After doing all of the math, Ponemon found that the cost of FDE on laptop and desktop computers in the U.S. per year was $235, while the cost savings from reduced data breach exposure was $4,650, a benefit-to-cost ratio of roughly 20, the top of the range cited above.
The research also revealed the reasons organizations choose to encrypt laptop and desktop computers in the first place. Across all four countries studied, and with respondents naming their top two reasons why data is encrypted on systems in their organizations, compliance with self-regulatory programs (32%) and national data protection laws (30%) came out on top. Following were:
• 25%: Minimizing exposure resulting from lost computers
• 23%: Avoiding harm to customers resulting from data loss
• 20%: Improving security posture
• 18%: Minimizing the cost of a data breach
• 17%: Complying with vendor/business partner agreements
• 10%: Minimizing the effect of cyberattacks

Whatever the cost or cost benefit, and whether free or commercial products are used, the Electronic Frontier Foundation is encouraging the use of FDE for protecting data on mobile devices. "Full disk encryption uses mathematical techniques to scramble data so it is unintelligible without the right key," said the nonprofit advocacy group. "Without encryption, forensic software can easily be used to bypass an account password and read all the files on your computer. Fortunately, modern computer systems come with comparatively easy full-disk encryption tools that let you encrypt the contents of your hard drive with a passphrase that will be required when you start your computer. Using these tools is the most fundamental security precaution for computer users who have confidential information on their hard drives and are concerned about losing control over their computers."
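As a rough illustration of what such tools look like on Linux, here is a minimal dm-crypt/LUKS sketch for encrypting a spare data partition (the device name is hypothetical, and encrypting the system drive itself is normally set up at install time):
# format the partition as a LUKS container (this destroys existing data)
cryptsetup luksFormat /dev/sdb1
# open it under a chosen name and create a file system on the mapped device
cryptsetup luksOpen /dev/sdb1 securedata
mkfs.ext4 /dev/mapper/securedata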
Likewise, Aberdeen IT security research fellow Derek Brink recommended that organizations "do something." In the report "Endpoint Security: Hardware Roots of Trust," which examines the increasing vulnerabilities in software and how hardware can be used to mitigate risk, Brink writes, "Regardless of which approach to data protection is taken, all companies should be doing something to mitigate this risk."
Aberdeen research has shown that between the models of encrypting only specific files or folders and the "brute force" of encrypting everything on the endpoint, the general trend is toward full-disk encryption and, increasingly, self-encrypting drives (SEDs). SEDs include a circuit built into the disk drive controller chip that encrypts all data automatically.
Brink adds that any type of encryption should be integrated with existing processes, including identity management and helpdesk processes, backup and recovery, patch management and end-user training. "The extent to which endpoint encryption can be made systematic and integral to these types of processes will be the biggest contributor to success, particularly on large-scale rollouts."



Tor Network Directory Project

Posted by Unknown, Tuesday, 25 September 2012, 0 comments
http://uscyberlabs.com/blog/2012/09/19/tor-network-directory-project/?goback=%2Egde_1873806_member_168038948


Lately we’ve all heard of Silk Road, the underground cyber marketplace where you can buy illegal drugs and guns, and people say all the bad guys are using the dark web for crime stuff. Yeah, DuDe. It’s just the Tor onion network: if you want to visit it, just go to torproject.org, download their bundle software and go surfing in the onion network. Since there is no Bing, Google or Yahoo in the onion network, if you want a directory of what’s out in onion land, just go to the hidden wiki: “Cleaned Hidden Wiki” - http://3suaolltfj2xjksb.onion/hiddenwiki/index.php/Main_Page.
The wiki was built by one of the founders of the onion network, the administrator of MyHiddenBlog (http://utup22qsb6ebeejs.onion/), together with volunteers. It is one of the few places where you can find some of the hidden services (websites) in Tor; in other words, the only websites in Tor that want to be found. You see, in the Tor onion network your site is your secret: your site is hidden because there is no Google or Yahoo to send web crawlers out into the onion network. The USCyberlabs Tor Network Directory Project will be the first time that we go out actively and collect all the websites (hidden services) that are hiding in the Tor onion network.

When I started to write about Tor and our new book, “The Deep Dark Web”, I was contacted by the FBI about what I was writing about Tor, the hidden services and attack vectors in Tor. They wanted to be gAtO’s bff. I must admit I was intimidated and walked a very careful line with my blog postings and my tweets. Why? Because the FBI wants to fuck with lawful security researchers who come too close to the truth about Tor.
They do not want this mapping of the Tor onion network. Why? The mapping of the Tor onion network will show all sites, even the ones that want to stay hidden. Like government sites? Like spy sites? I mentioned bots with Tor C&C; yeah, government stuff. You of course have your corporate presence in the hidden services of Tor; what will these Tor websites show? Maybe it’s not just the bad guys using Tor. Maybe.
There are currently only 9 directory servers in the Tor infrastructure that know all the sites on Tor, and getting this list is kind of hard; Tor is designed not to give out directory information to just anyone. We also want more than a URL of a live site: we will gather all the meta-data so we can understand what these sites are all about. Google’s web crawlers do it every second of the day, so we will send out crawlers into the Tor onion network to generate our directory of Tor.
The Tor Directory Scan Project (TDS)
The USCyberlabs TDS Project is to scan every possible address and generate a directory of every single live hidden service in the Tor .onion network.
Figuring out the rendezvous for a hidden service is complicated, so we attack the problem from the side: an onion URL is 16 characters, drawn from a-z and 2-7, followed by the .onion suffix. It’s easy to have a simple web crawler program count through a, b, c and generate a sequential, alphabetized URL list. Due to the Tor network, things work slowly, at old-style modem speeds that you young kids are not used to. We feed it a URL, wait up to 25-35 seconds, then record a hit or a no-go. Once we have a list of possible live hidden services, we visit them manually and build a working, verified list (with login and password) of every hidden service on Tor.
With 100 VMs we can scan Tor in weeks; with 1,000 machines we can scan the Tor network within days.
I tested the Unix curl command in Tor with SOCKS5, and it’s very good at extracting information from a website. So a simple script will feed all the machines and they will start the scan. Once finished, we take all the results and we will have a directory of every single hidden service in Tor land.
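A minimal sketch of such a probe, assuming a local Tor client listening on the standard SOCKS5 port 9050 (the candidate addresses here are hypothetical):
# walk candidate .onion addresses and record the ones that answer
for url in aaaaaaaaaaaaaaaa.onion aaaaaaaaaaaaaaab.onion; do
  if curl --silent --max-time 35 --socks5-hostname 127.0.0.1:9050 "http://$url/" >/dev/null; then
    echo "$url is live"
  fi
done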
gAtO needs your help!


Create shared space for your multi-boot system

Posted by Unknown, 0 comments
http://www.linuxuser.co.uk/tutorials/create-shared-space-for-your-multi-boot-system


Many people set up their machines to boot up into multiple Linux distributions. This may be because they are developing software that they wish to test under different distributions, or they might just be interested in trying out new distributions and seeing what they offer. One issue that comes up is that files created while you are in one distribution aren’t easily accessible once you reboot into another one.
In this tutorial, we’ll look at the steps needed to create a shared space to store your files so that you have access to them, regardless of which distribution you boot into. In this way, you will still have a separate home directory in each installation to hold all of your application settings, along with a shared data directory that will contain all of the files that you want to have access to in all of the installations.

Step by Step

Figure: Looking for extra space
Step 1 Where to set up
You will want to create a common data area that will be accessible from all of the installed OSs on your system. This could be on an external drive, like a USB key. However, this means that you would need to make sure that this external media was plugged in any time you wanted access. A better solution would be to create a partition on the internal drive to house this data area.
Step 2 Booting a live CD
Going this route means making changes to the partition table. This will require the disk partitions to be unmounted so that they can be altered. To do this, you will need to boot up from a live CD. A good first choice is SystemRescueCd. This CD will give you all of the tools you could possibly need to resize your current partitions and create a new one to hold your data directory. The easiest way to utilise it is to use UNetbootin to create a bootable USB key. There is also Parted Magic, which is one of the options available in UNetbootin.
Step 3 Resizing a partition
Most people use all of the available drive space when they install their OSs of choice, so this means that you will need to shrink at least one of the existing partitions to make space. With GParted, you can right-click on a partition and select ‘resize’.
Step 4 Resizing tip
When resizing partitions, be sure that you pick the right one. If you have a FAT or VFAT partition, be sure to defrag it first. If you end up resizing a partition on the middle of the disk, you will only be able to use the space up until the next partition.
Step 5 Creating a new partition
Once you have some free space available on your drive, you can go ahead and create a new partition to hold your data directory.
Note that there are two types of partitions: primary and logical. You can only have up to four primary partitions – so if you have more than this, you will need to make your new partition a logical one.
You can simply highlight the unused space section and click the ‘add’ icon in the top-left corner. This will pop up a dialog where you can set the size, whether it is a primary or logical partition, and what file system to format the new partition with.
Step 6 Reboot
Once you have created and formatted your new partition, you will want to write these changes to the disk and reboot back into your primary operating system or distribution.
This will then leave you ready to start using this new disk partition in all of your installed distributions.
Step 7 Creating a mount point
In order to use a disk partition, it has to be mounted to some point in your root file system. Since this will be used for your data in your regular user account, you will probably want to place it somewhere in your home directory. A good spot might be a subdirectory called my_data.
You would create this subdirectory with the following command:
mkdir ~/my_data
Figure: Checking on which partitions are mounted
Step 8 Sudo
The next step is to be sure that the new partition is actually accessible. To do this, you will want to try mounting it manually.
On most Linux distributions, only the root user can do this. But, if your user account is set up as an administrator, you should have access to use the sudo command. This lets you run commands as if you were root.
Sudo will ask for your password in order to verify that it really is you.
Step 9 Manual mounting
Mounting the new partition is achieved using a single command:
sudo mount /dev/sdaX ~/my_data
…where X is the partition number of the new partition you just created.
In most cases, the mount tool is smart enough to see what file system is being used on the partition and use that. In the cases where it can’t, or you want to force it to use something else, you can hand it the ‘-t’ option and explicitly set the file system type.
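For example, to mount the new partition explicitly as ext3 (a sketch; replace X with your partition number and ext3 with whatever file system you formatted the partition with):
sudo mount -t ext3 /dev/sdaX ~/my_data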
Step 10 Changing ownership
When you mount a filesystem with sudo, the new subdirectory will end up being owned by root. This is not very useful to you, since you won’t be able to read or write to it. You can fix this by changing the ownership. Simply execute:
sudo chown -R jbernard:jbernard ~/my_data
Note that you should replace ‘jbernard’ with your own username.
Step 11 Checking your mounts
You can verify your mounted partitions with the mount command. If you execute ‘mount’ with no options, it will present all of the mounted disk partitions and where on the file system these partitions are mounted to. The output for each mounted partition will look like:
/dev/sda3 on /home type ext4 (rw)
where any mount options are listed in brackets at the end.
Step 12 Unmounting
Once you are satisfied with everything, you can go ahead and unmount your data directory. You can either execute:
umount /dev/sda4
or
umount ~/my_data
You may get an error if this mount point is still in active use. For example, if you have a file browser open in the directory. Be sure to close any programs open in the data directory.
Step 13 Finding open files
Sometimes you may not realise what files are being held open, stopping you from unmounting the file system. The program lsof gives you a list of open files. You can execute:
lsof ~/my_data
to get a listing of all of the files in the subdirectory my_data that are being used by a program. This should help you identify and close down any troublesome program, and allow you to successfully unmount the data partition.
Step 14 Automating mounts
On boot, Linux looks at file ‘/etc/fstab’ to decide what partitions are to be mounted and where they are to be mounted to on the file system. This is where you will find the mount options for things like the root file system, or the home directories if they reside in their own partition. You can add a new line here for your data directory so that it too will get automatically mounted at boot time.
Figure: Adding your own extra line
Step 15 Fstab options
The new line in fstab should look something like:
/dev/sda4 /home/jbernard/my_data ext3 user,defaults 0 0
where you would enter the partition and directory names appropriate to your setup. The third field is the file system type. The fourth field is a comma-separated list of options. The two you will want to use are ‘user’ and ‘defaults’, which will set some sane defaults and also let the data partition be mounted by regular users.
Step 16 Tip about external drives
All of these instructions can be applied to an external drive, like a USB key. If you do this, you need to be sure to have it plugged in before booting. Otherwise, the system will fail to mount it and hang. You can get around this by adding ‘noauto’ to the fstab listing, but then you are responsible for remembering to plug it in before trying to use it.
Step 17 UUIDs
In step 15, we used the entry:
/dev/sda4
to identify the partition we wanted to mount. This might not work if we also use other removable media, since this could change the partition numbers that Linux uses. In these cases you can use the UUID, which is a unique identification for each device and partition. You can find it by executing
sudo vol_id -uuid /dev/sda4
You can then replace
/dev/sda4
in fstab with the entry
UUID=xxxxxxxx
where the Xs are the UUID alphanumeric characters of the device.
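Note that on newer systems the vol_id utility has been removed from udev; if it is missing on your distribution, the blkid tool reports the same information:
sudo blkid /dev/sda4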
Step 18 Next system
Once you have this sorted out for the first distribution, doing the same for the other distributions is much easier. In each of the other distributions, you will need to create the same subdirectory named the same way.
You will also need to add the extra line to the file ‘/etc/fstab’. This will make your new data partition available under each of these installed distributions.
Step 19 Windows?
A lot of people also use Windows in their multi-boot systems. So what do you do? If you stick with more standard file systems, like ext2, then there are drivers and programs available that will allow you to mount this data partition and access your files. Remember that the ext3 and ext4 file systems are downwards-compatible with ext2, so they should still be accessible.
You now have no excuse for not getting your work done, regardless of which OS you happen to be booted into.


Serving CGI Scripts With Nginx On Fedora 17

Posted by Unknown, 0 comments
http://www.howtoforge.com/serving-cgi-scripts-with-nginx-on-fedora-17


This tutorial shows how you can serve CGI scripts (Perl scripts) with nginx on Fedora 17. While nginx itself does not serve CGI, there are several ways to work around this. I will outline two solutions: the first is to proxy requests for CGI scripts to Thttpd, a small web server that has CGI support, while the second solution uses a CGI wrapper to serve CGI scripts.
I do not issue any guarantee that this will work for you!

1 Preliminary Note

I'm using the website www.example.com here with the document root /var/www/www.example.com/web/; the vhost configuration is located in /etc/nginx/conf.d/www.example.com.vhost.

2 Using Thttpd

In this chapter I am going to describe how to configure nginx to proxy requests for CGI scripts (extensions .cgi or .pl) to Thttpd. I will configure Thttpd to run on port 8000.
First we install Thttpd. There is a Thttpd package for Fedora 17, but the nginx ThttpdCGI page says that Thttpd should be patched - therefore we download the src.rpm package for Fedora 17, patch it and build a new rpm package from it.
We need to install the tools that are required to build a new rpm package:
yum groupinstall 'Development Tools'
Install yum-utils (the package contains the yumdownloader tool which allows us to download a src.rpm):
yum install yum-utils
Next we download the Thttpd src.rpm package for Fedora 17:
cd /usr/src
yumdownloader --source thttpd
ls -l
[root@server1 src]# ls -l
total 164
drwxr-xr-x. 2 root root   4096 Feb  3  2012 debug
drwxr-xr-x. 3 root root   4096 Jun  4 18:21 kernels
-rw-r--r--  1 root root 155690 Mar 28 03:21 thttpd-2.25b-27.fc17.src.rpm
[root@server1 src]#
rpm -ivh thttpd-2.25b-27.fc17.src.rpm
You can ignore the following warnings:
[root@server1 src]# rpm -ivh thttpd-2.25b-27.fc17.src.rpm
   1:thttpd                 warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
########################################### [100%]
[root@server1 src]#
Now we download the patch to the /root/rpmbuild/SOURCES/ directory and modify the /root/rpmbuild/SPECS/thttpd.spec file accordingly:
cd /root/rpmbuild/SOURCES/
wget -O thttpd-2.25b-ipreal.patch http://www.danielclemente.com/amarok/ip_real.txt
cd /root/rpmbuild/SPECS/
vi thttpd.spec
Add the lines Patch3: thttpd-2.25b-ipreal.patch and %patch3 -p1 -b .ipreal:
[...]
Patch0: thttpd-2.25b-CVE-2005-3124.patch
Patch1: thttpd-2.25b-fixes.patch
Patch2: thttpd-2.25b-getline.patch
Patch3: thttpd-2.25b-ipreal.patch
[...]
%prep
%setup -q
%patch0 -p1 -b .CVE-2005-3124
%patch1 -p1 -b .fixes
%patch2 -p1 -b .getline
%patch3 -p1 -b .ipreal
[...]
Now we build our Thttpd rpm package as follows:
rpmbuild -ba thttpd.spec
Our Thttpd rpm package is created in /root/rpmbuild/RPMS/x86_64 (/root/rpmbuild/RPMS/i386 if you are on an i386 system), so we go there:
cd /root/rpmbuild/RPMS/x86_64
ls -l
[root@server1 x86_64]# ls -l
total 224
-rw-r--r-- 1 root root  69881 Sep  3 23:17 thttpd-2.25b-27.fc17.x86_64.rpm
-rw-r--r-- 1 root root 151685 Sep  3 23:17 thttpd-debuginfo-2.25b-27.fc17.x86_64.rpm
[root@server1 x86_64]#
Install the Thttpd package as follows:
rpm -ivh thttpd-2.25b-27.fc17.x86_64.rpm
Then we make a backup of the original /etc/thttpd.conf file and create a new one as follows:
mv /etc/thttpd.conf /etc/thttpd.conf_orig
vi /etc/thttpd.conf
# BEWARE : No empty lines are allowed!
# This section overrides defaults
# This section _documents_ defaults in effect
# port=80
# nosymlink # default = !chroot
# novhost
# nocgipat
# nothrottles
# host=0.0.0.0
# charset=iso-8859-1
host=127.0.0.1
port=8000
user=thttpd
logfile=/var/log/thttpd.log
pidfile=/var/run/thttpd.pid
dir=/var/www
cgipat=**.cgi|**.pl
This will make Thttpd listen on port 8000 on 127.0.0.1; its document root is /var/www.
Create the system startup links for Thttpd...
systemctl enable thttpd.service
... and start it:
systemctl start thttpd.service
Next create /etc/nginx/proxy.conf:
vi /etc/nginx/proxy.conf
proxy_redirect          off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
Now open your vhost configuration file...
vi /etc/nginx/conf.d/www.example.com.vhost
... and add a location /cgi-bin {} section to the server {} container:
server {
[...]
location /cgi-bin {
include proxy.conf;
proxy_pass http://127.0.0.1:8000;
}
[...]
}
Reload nginx:
systemctl reload nginx.service
Because Thttpd's document root is /var/www, location /cgi-bin translates to the directory /var/www/cgi-bin. This is true for all your vhosts, which means each vhost must place its CGI scripts in /var/www/cgi-bin; this is a drawback for shared hosting environments. The solution is to use a CGI wrapper, as described in chapter 3, instead of Thttpd.
Create the directory...
mkdir /var/www/cgi-bin
... and then place your CGI scripts in it and make them executable. For testing purposes I will create a small Hello World Perl script (instead of hello_world.cgi you can also use the extension .pl -> hello_world.pl):
vi /var/www/cgi-bin/hello_world.cgi
#!/usr/bin/perl -w

# Tell perl to send a html header.
# So your browser gets the output
# rather than plain text (as on the
# command line on the server).
print "Content-type: text/html\n\n";

# print your basic html tags.
# and the content of them.
print "<html><head><title>Hello World!!</title></head>\n";
print "<body>\n";
print "<h1>Hello world</h1>\n";
print "</body></html>\n";
chmod 755 /var/www/cgi-bin/hello_world.cgi
Open a browser and test the script:
http://www.example.com/cgi-bin/hello_world.cgi
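You can also test from the server's command line with curl, assuming www.example.com resolves to this host:
curl http://www.example.com/cgi-bin/hello_world.cgi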
If all goes well, the script should return a simple page with the heading "Hello world".


3 Using Fcgiwrap

 
Fcgiwrap is a CGI wrapper that can be used for shared hosting environments because it allows each vhost to use its own cgi-bin directory.
As there's no fcgiwrap package for Fedora, we must build it ourselves. First we install some prerequisites:
yum groupinstall 'Development Tools'
yum install fcgi-devel
Now we can build fcgiwrap as follows:
cd /usr/local/src/
git clone git://github.com/gnosek/fcgiwrap.git
cd fcgiwrap
autoreconf -i
./configure
make
make install
This installs fcgiwrap to /usr/local/sbin/fcgiwrap.
Next we install the spawn-fcgi package which allows us to run fcgiwrap as a daemon:
yum install spawn-fcgi
Open /etc/sysconfig/spawn-fcgi...
vi /etc/sysconfig/spawn-fcgi
... and modify the file as follows:
# You must set some working options before the "spawn-fcgi" service will work.
# If SOCKET points to a file, then this file is cleaned up by the init script.
#
# See spawn-fcgi(1) for all possible options.
#
# Example :
#SOCKET=/var/run/php-fcgi.sock
#OPTIONS="-u apache -g apache -s $SOCKET -S -M 0600 -C 32 -F 1 -P /var/run/spawn-fcgi.pid -- /usr/bin/php-cgi"

FCGI_SOCKET=/var/run/fcgiwrap.socket
FCGI_PROGRAM=/usr/local/sbin/fcgiwrap
FCGI_USER=nginx
FCGI_GROUP=nginx
FCGI_EXTRA_OPTIONS="-M 0700"
OPTIONS="-u $FCGI_USER -g $FCGI_GROUP -s $FCGI_SOCKET -S $FCGI_EXTRA_OPTIONS -F 1 -P /var/run/spawn-fcgi.pid -- $FCGI_PROGRAM"
Create the system startup links for spawn-fcgi...
systemctl enable spawn-fcgi.service
... and start it as follows:
systemctl start spawn-fcgi.service
You should now find the fcgiwrap socket in /var/run/fcgiwrap.socket, owned by the user and group nginx.
Now open your vhost configuration file...
vi /etc/nginx/conf.d/www.example.com.vhost
... and add a location /cgi-bin {} section to the server {} container:
server {
[...]
location /cgi-bin/ {
# Disable gzip (it makes scripts feel slower since they have to complete
# before getting gzipped)
gzip off;

# Set the root to the vhost's document root (inside this location this means
# that we are giving access to the files under /var/www/www.example.com/cgi-bin)
root /var/www/www.example.com;

# Fastcgi socket
fastcgi_pass unix:/var/run/fcgiwrap.socket;

# Fastcgi parameters, include the standard ones
include /etc/nginx/fastcgi_params;

# Adjust non standard parameters (SCRIPT_FILENAME)
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
[...]
}
Reload nginx:
systemctl reload nginx.service
Next we create our cgi-bin directory - /var/www/www.example.com/cgi-bin because we defined root /var/www/www.example.com; in the location /cgi-bin {} container:
mkdir /var/www/www.example.com/cgi-bin
Now we place our CGI scripts in it and make them executable. For testing purposes I will create a small Hello World Perl script (instead of hello_world.cgi you can also use the extension .pl -> hello_world.pl):
vi /var/www/www.example.com/cgi-bin/hello_world.cgi
#!/usr/bin/perl -w

# Tell perl to send a html header.
# So your browser gets the output
# rather than plain text (as on the
# command line on the server).
print "Content-type: text/html\n\n";

# print your basic html tags.
# and the content of them.
print "<html><head><title>Hello World!!</title></head>\n";
print "<body>\n";
print "<h1>Hello world</h1>\n";
print "</body></html>\n";
chmod 755 /var/www/www.example.com/cgi-bin/hello_world.cgi
Open a browser and test the script:
http://www.example.com/cgi-bin/hello_world.cgi
If all goes well, you should get the same "Hello world" page as before.


 


Sharing Terminal Sessions With Tmux And Screen

Posted by Unknown, Monday, 24 September 2012, 0 comments
http://www.howtoforge.com/sharing-terminal-sessions-with-tmux-and-screen


tmux and GNU Screen are well-known utilities which allow multiplexing of virtual consoles. Using either, it is possible to start a session, detach, move to a different machine and resume the session in uninterrupted progress. It's also possible to use these tools to share a single session between more than one user at the same time.

Basic Sharing with a Single Account

If an account is held jointly between two or more users, then sharing the terminal console is very simple: neither tmux nor screen requires anything out of the ordinary for basic sharing when the same account is logged in multiple times.

Basic sharing with screen

In one terminal create a new session for screen, where foobar is the name of your screen session:
screen -S foobar
Then in the other terminal, attach to that session.
screen -x foobar
That's it, there were just two steps.

Basic sharing with tmux

Again, there are only two steps. In the first terminal, start tmux where shared is the session name:
tmux new-session -s shared
Then in the second terminal attach to the shared session.
tmux attach-session -t shared
That's it.

Sharing Between Two Different Accounts

Sharing between two different accounts requires some additional steps to grant the privileges necessary for one account to access another's session. In some cases, it will require help from the system administrator to prepare the setup.

Sharing between two different accounts with tmux

For different users, you have to set the permissions on the tmux socket so that both users can read and write it. There is only one prerequisite: there must be a group in common between the two users. If such a group does not exist, it will be necessary to create one.
In the first terminal, start tmux where shared is the session name and shareds is the name of the socket:
tmux -S /tmp/shareds new -s shared
Then chgrp the socket to a group that both users share in common. In this example, joint is the group that both users share. If there are other users in the group, then they also have access, so it may be best for the group to have only the two members.
chgrp joint /tmp/shareds
In the second terminal attach using that socket and session.
tmux -S /tmp/shareds attach -t shared
That's it. The session can be made read-only for the second user, but only on a voluntary basis. The decision to work read-only is made when the second user attaches to the session.
tmux -S /tmp/shareds attach -t shared -r

Sharing between two different accounts with screen

If you are logged in as two different users, there are three prerequisites to using screen. First, screen must be set SUID and it is necessary to remove group write access from /var/run/screen. The safety of using SUID in this context is something to consider. Then you must use screen's ACLs to grant permission to the second user.
sudo chmod u+s /usr/bin/screen
sudo chmod 755 /var/run/screen
In the first user's terminal, start screen as in the basic sharing above, where foobar is the name of the screen session. Then turn on multiuser mode and add user2 to the ACL, where user2 is the second account to be sharing the session.
screen -S foobar
^A:multiuser on
^A:acladd user2
The session can be made read-only for the second user by entering the following ACL change: ^A:aclchg user2 -w "#?"
Then in the other terminal, attach to the first user's session.
screen -x user1/foobar
It is also possible to put multiuser on and acladd user2 into .screenrc to have it take effect automatically upon starting screen. If the changes are not desired in all screen sessions, then a separate .screenrc configuration file can be specified by using the -c option when starting screen.
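For example, a separate configuration for shared sessions might look like this (a sketch; user2 is the account you want to admit):
# ~/.screenrc-shared
multiuser on
acladd user2
You would then start the shared session with:
screen -c ~/.screenrc-shared -S foobar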

Afterthought

Be careful when exiting. If you just exit the shell, it will end the terminal session for all parties. If you instead detach from the session, then the other user can continue working uninterrupted. In tmux that is ^B-d, and in screen it is ^A-d.


Video Art: Experimental Animation and Video Techniques in Linux

Posted by Unknown, Sunday, 23 September 2012, 0 comments
http://www.linuxjournal.com/content/experimental-animation-and-video-techniques-linux


 Animation and video editing in Linux can be treacherous territory. Anyone who has tried working in these media probably has experienced the frustration of rendering a huge file for an hour only to see the program crash before the export is finished. A bevy of tools and applications for manipulating video exist for Linux, and some are more mature than others.
The most mainstream of GUI applications have been covered quite a bit in other Linux-related articles on the Web and in print, including in previous issues of Linux Journal. Some of these names may ring familiar to you: Kino, PiTiVi, Openshot, Cinelerra, Kdenlive and Open Movie Editor.
Although I refer to these nonlinear editors (NLEs) from time to time here, the main purpose of this article is to introduce some video effects and techniques you may not have thought of before. If you are producing a film or animation in a conventional format, such as a DVD or a Web video, you most likely will want to employ a suitable NLE at some point in your process. Many of the ideas I present in this article are experimental.

Video Editing

LiVES
LiVES is primarily a VJ (video jockey) tool for performing live audio-visual effects, but it also can encode and export video via its MPlayer back end. The interface has two modes: clip editor and multitrack editor. The clip editor view is more suitable for live VJ sets, while you'll probably lean toward the multitrack view if using LiVES as your NLE.
Figure 1. LiVES in the Clip Editor View
LiVES is highly extensible. In addition to the built-in effects, you can apply custom RFX (rendered/real-time effects) plugins. Several of these scripts are available for download from the LiVES Web site. You also can share LiVES' real-time effects with other applications using the frei0r effects API.
The number of options and the advanced effects in LiVES are comparable to those of Cinelerra, but I strongly recommend LiVES over the latter. Cinelerra is indeed a powerful video editor, but the interface is antiquated and difficult to use. Although LiVES can seem foreign to new users, it is not hard to become acquainted with it.
ZS4
ZS4, formerly known as Zweistein, is a unique—and quite strange—video editor and compositor. The developers of ZS4, who go by the name "t@b", are a duo of musicians who use their own software to create music videos. They are hard to pinpoint on the Web, as they use several sites for different purposes.
I admit that I was confused by the existence of both zs4.net and zs4.org, as well as the Viagra advertisement lines that appeared in Google search results at the zs4.net domain. The two sites both contain download links for ZS4 as well as some other software.
If you plan to use ZS4, I recommend downloading the t@b Media Converter and/or installing Avidemux, as ZS4 is picky about importing video files. Most videos are not compatible out of the box, so it may be necessary to convert them to a format ZS4 can work with.
Working with ZS4 can be frustrating at first because the interface is far from intuitive. Menus are not where you would expect them to be, and you might find yourself aimlessly clicking your cursor in different places to accomplish a simple task, such as dragging a media file into the timeline. The media viewing windows are vaguely labeled "rectangles". To show or hide a track, you click on + or - instead of the typical open- or closed-eye icon.
It took me years to gather the patience to learn my way around this program. So yes, the GUI needs some serious work if it ever is going to reach a mass audience, but it doesn't seem like mainstream appeal is a major concern for the eccentric-minded developers.
So, why tell you about a bizarre-looking application that hasn't been updated in years when there are plenty of other video editors for Linux? Well, for all ZS4's graphical quirks, it can accomplish some very interesting compositing effects.
Figure 2. Tiling Effects in ZS4


 Animation

GIMP
The famous GNU Image Manipulation Program can create animations as well as still images. Because it is a full-featured image editing program, you can use it to create an animation entirely from scratch.
In order to import a prepared image sequence into GIMP, click File→Open as Layers... or press Ctrl-Alt-o. The Open Image dialog allows you to select multiple files, which then will appear as layers.
Figure 3. An animation in progress that I made by tracing reference photos of faces from the Psychological Image Collection at Stirling (PICS).
In the example shown in Figure 3, I imported a series of reference photos into GIMP and traced over them first in cyan and then in black. I eventually deleted the reference photos and cyan layers, leaving only the black-lined drawings that I planned to use for my final animation.
To finish my animation, I exported the layers as a GIF and specified animation parameters in the export dialog. Because I wanted to use the animation in a video, I had to turn the animated GIF into a video file. I ultimately chose to do this by way of screen recording, but that is not the only option.
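If you would rather skip the screen recording, FFmpeg usually can turn an animated GIF into a video file directly; a minimal sketch, assuming your FFmpeg build can demux GIFs:

ffmpeg -i animation.gif animation.mp4
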

From Stills to Movies

Let's say you have a sequence of images, or perhaps an animated GIF, that you want to make into a video file. There are several ways to go about this.
Stopmotion
Stopmotion started as a student project under the Skolelinux/Debian-edu organization in 2005. Although it hasn't been updated since 2008, I find it to be a handy tool for anyone working with frame-by-frame animation. You might have trouble finding Stopmotion in your distribution's repositories if you aren't using a DEB- or RPM-based package manager, but you can, of course, compile it from source on any distribution; that's how I set it up in Sabayon Linux.
Stopmotion is simple and to the point, with a nice drag-and-drop interface. It's not designed for heavy post-production or for drawing and adding effects to frames. Rather, the point is to give users an easy way to arrange images sequentially and export them into a video file.
The video import and export options are limited only by your imagination (and your knowledge of the command line). If you know how to use FFmpeg and/or MEncoder to convert image sequences to video, you can pass your desired command-line arguments to Stopmotion, which is essentially a GUI for those programs. Stopmotion also gives you several choices of video capture commands for grabbing video from your Webcam or another attached device.
One cool feature I didn't know about until I read the user's handbook was the option to add sound. You can set a sound clip to start at any given frame by double-clicking on it. The audio I added to my sequence didn't play in the exported AVI, but maybe you'll have better luck.
If you want to perform more-advanced editing on your individual frames, Stopmotion has a button to open a selected frame in GIMP. You also can export your data into Cinelerra for video editing.
Figure 4. Animating a Sequence of Faces in Stopmotion

 Command Line

There are several ways to turn frames into motion via the command line.
jpegtoavi
jpegtoavi is a simple C script that does exactly what its name suggests—converts a sequence of *.jpg files into an AVI movie. If your images are not in the JPEG format, you first can convert them using the convert command from ImageMagick:

convert image.png image.jpg

If you need to convert a batch of images in a folder, ImageMagick grants you about a gazillion different methods. One of these is to cd to that directory and do:

convert *.png image.jpg

The new filenames will be numbered automatically (image-0.jpg, image-1.jpg and so on).
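Another of those gazillion methods is ImageMagick's mogrify, which converts every matching file in one pass; unlike convert, it keeps each file's original base name:

mogrify -format jpg *.png
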
Once you have your folder of sequenced JPEG files, you can employ jpegtoavi. A basic usage template from the man page is:

jpegtoavi -f fps width height img1 [img2 ... imgN]

Although jpegtoavi is nice for simple tasks, minimal documentation exists. I was surprised to find that none of the main Web sites hosting downloads of the software provided any type of wiki or instructions beyond what already was in the man page and README.
You can do more-advanced encoding with FFmpeg and MEncoder, both of which are heavily documented on-line and in their man pages. These programs both rely on libavcodec and have many overlapping uses, but the command formats are different. For this article, I cover only FFmpeg.
This will convert a folder of GIF files sequenced as "image-001", "image-002" and so forth into an MP4 movie file with a framerate of 10 frames per second and a reasonably high bitrate of 1800 kbps (note the k suffix, which FFmpeg expects, and that -b, being an output option, goes after -i):

ffmpeg -r 10 -i image-%03d.gif -b 1800k movie.mp4

Make sure your files are named properly, because encoding will stop early if the program encounters a gap in the number sequence.
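If you do end up with a gap, a short shell loop can renumber the frames contiguously before you encode; a minimal sketch, assuming bash and filenames that sort correctly:

n=1
for f in image-*.gif; do
    # zero-pad the counter to match the %03d pattern above
    printf -v new 'image-%03d.gif' "$n"
    [ "$f" != "$new" ] && mv -- "$f" "$new"
    n=$((n+1))
done
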

Animated GIFs

If you're a citizen of the Internet, you've no doubt noticed the recent proliferation of animated GIFs on sites like Tumblr.com. Now that more people have access to high-bandwidth network connections, the GIF art form is not so limited in resolution and number of frames as it was in the 1990s when tiny GIF animations originally rose to popularity in Geocities and Angelfire home pages. Modern GIF animations often display entire scenes from movies.
So, are you ready to pimp out some mad GIF skills?
With ImageMagick, it's easy to fashion an animated GIF from a sequence of non-GIF images:

cd /path/to/image/folder ; convert *.jpg animation.gif
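
By default, convert gives each frame almost no delay; you can set the timing and looping explicitly, here at 10 frames per second, looping forever:

convert -delay 10 -loop 0 *.jpg animation.gif
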

The mother of all command-line GIF manipulation programs, though, is Gifsicle. Your images must already be in the GIF format to use it.
To create a GIF that animates just once, do:

gifsicle image1.gif image2.gif image3.gif > animation.gif

In most cases, you'll want your animated GIF to loop endlessly. You also may want to specify parameters, such as framerate. Try this for a dithered animation that loops at 10 frames per second:

gifsicle --loopcount=0 --delay 10 --dither image1.gif image2.gif image3.gif > animation.gif

You also can use Gifsicle in reverse mode—that is, to extract the individual frames from an animated GIF. Just use the --explode argument:

gifsicle --explode animation.gif
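
The frames are written as animation.gif.000, animation.gif.001 and so on (check your version's man page for the exact naming); after editing them, you can stitch the sequence back together:

gifsicle --delay 10 --loopcount=0 animation.gif.* > rebuilt.gif
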

Now, go out (and by "out", I mean to your nearest terminal) and explore all the neat tricks you can do with Gifsicle!
Here's one more to get your feet wet: take a ready-made animated GIF with a white background and make it transparent:

gifsicle --transparent '#FFFFFF' --disposal 2 animation.gif > animation-transparent.gif

 Abstraction

For most of us, the notion of animation brings to mind deliberate, structured sequences. In this section, I introduce some less-traditional ways of creating mind-blowing, computer-generated effects without having to know languages like Processing or Pure Data (both of these are very powerful, but not everyone who wants to animate knows how to code).
In my own work with video, screen recording tools have been indispensable. Sometimes I use them to capture animations I make in Pencil, because the movie export feature is broken in the version I use. Other times, I just want to capture some cool imagery on my screen without worrying about proprietary copyrights, so I take screen recordings of free software.
My preferred screen recorder is the bare-bones, command-line version of recordMyDesktop. Your distribution's repositories also might provide the graphical front ends GTK-recordmydesktop and QT-recordmydesktop, but I find those to be buggy and prone to crashes when recording long scenes. You can record your entire screen with:

recordmydesktop screenrecording.ogv

The recording will start as soon as you enter the command, and it will stop when you press Ctrl-c. Read the man page for more options, such as recording a specific window (tip: find a window's ID with xwininfo).
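For example, to record a single window, grab its ID with xwininfo and pass it along (the hex ID below is just a placeholder for whatever xwininfo reports on your system):

xwininfo | grep "Window id"    # click the target window to print its ID
recordmydesktop --windowid 0x3e00007 -o window-recording.ogv
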
Electric Sheep
If you aren't familiar with the trippiest screensaver in the world, go on-line and look up some images of Electric Sheep. The software artist Scott Draves created Electric Sheep as a dynamic, collaborative fractal flame animation that runs on and by thousands of computers worldwide. Networked data determines the mutations of the various "sheep" in the animation, and users can vote on and contribute their own sheep. And because it's all free, anyone can use the images generated in this android dream.
So how do you take a screen recording of a screensaver? Well, guess what: Electric Sheep is a binary. Just enter electricsheep into your terminal and watch the magic in MPlayer.
If you want to create your own sheep, check out the program Qosmic.
Figure 5. Using recordMyDesktop to Capture Electric Sheep
XaoS
XaoS is a real-time, interactive fractal zoomer that will capture the eye of mathematicians and VJs alike. You can change the fractal formulae and colors with many different parameters and filters. My favorite is the Pseudo-3D filter, which extrudes lines to generate what looks like a surreal landscape. Using the left and right mouse buttons, you can zoom in and out as if flying a plane over the "terrain".
Figure 6. XaoS with the Pseudo-3D Filter Applied
Fyre
Fyre is a program that generates and animates Peter de Jong maps. You don't need a screen recorder to make animations with this; you can enter key frames and render an AVI file directly from the program. As you can see from the screenshot shown in Figure 7, Peter de Jong maps make for some neat, abstract images.
Figure 7. Animating in Fyre

Alphas and More to Look Out For

Unfortunately, there is not enough space in this article or in my brain to cover all the new video-related Linux software that's in development. In lieu of a complete list, I'll provide you with the names of a few projects that I expect to be worth checking out for both developers and end users.
Auteur
Auteur is one cool new kid on the block. I first heard of this project in an episode of the podcast "The Bad Apples" (which has since been re-branded as "GNU World Order"), produced by Seth Kenlon, aka Klaatu, who is also a developer on the Auteur team. Klaatu noted the absence of a truly solid nonlinear video editor for Linux, so he set out to make one with all the features he felt existing software was lacking. The Web site currently says that the project is frozen due to lack of programmers—so programmers, why not help out with a promising alpha?
Figure 8. Testing Out Auteur
VLMC
The folks behind the VLC media player have a nascent project called VLMC (VideoLAN Movie Creator). The latest releases are still basic and not quite stable, but knowing what the VideoLAN team is capable of, I am sure this will mature into a serious nonlinear video editor. They currently are looking for contributors.
Pencil
Pencil is a traditional 2-D animation program, which, although still in beta, already fills a gaping hole in the sphere of Linux animation tools. It allows drawing in both vector and bitmap formats as well as importing images and sounds. My trials with Pencil have been basic but mostly satisfactory, although the video export feature appears broken in Linux. I have compensated for that and made some cool videos anyway simply by taking a screen recording during animation playback in Pencil. There is an active community of Pencil users who post animations on the Pencil Web site's gallery. Pencil is similar to Synfig Studio, but I find the interface easier to navigate.
Figure 9. An Animation I Made in Pencil
Puredyne
Puredyne is a multimedia Linux distribution based on Ubuntu and Debian Live, specifically designed for real-time audio-visual processing. Many of the tools and APIs I haven't had the verbal real estate to cover in this article (such as FreeJ, Gephex and DataDada) are included either in the distribution itself or in optional modules.
And, there you have it, animators and filmmakers. I hope this article inspires a cool music video or two!

Resources

LiVES: http://lives.sourceforge.net
Frei0r: http://piksel.org/frei0r
ZS4 Video Compositing: http://zs4.org
t@b Software: http://www.thugsatbay.com/tab/?q=software
ZS4 Download: http://zs4.net/download
GIMP: http://www.gimp.org
Psychological Image Collection at Stirling (PICS): http://pics.psych.stir.ac.uk
Stopmotion: http://stopmotion.bjoernen.com
JPEG to MJPEG-AVI Converter: http://sourceforge.net/projects/jpegtoavi
FFmpeg: http://ffmpeg.org
Gifsicle: http://www.lcdf.org/gifsicle
recordMyDesktop: http://recordmydesktop.sourceforge.net
Electric Sheep: http://www.electricsheep.org
GNU XaoS: http://wmi.math.u-szeged.hu/xaos/doku.php
Fyre: http://fyre.navi.cx
Auteur Non-Linear Editor: http://auteur-editor.info
VLMC: http://www.videolan.org/vlmc
Pencil—Traditional Animation Software: http://www.pencil-animation.org
Puredyne: http://www.puredyne.org 

