What is new in kernel 2.6.38
A quite minor change in the process scheduler makes systems running 2.6.38 feel much more responsive, and more far-reaching changes to the VFS (virtual file system) make some tasks considerably faster. Driver changes that deserve mention include new Wireless LAN (WLAN) drivers and expanded support for current graphics chips from AMD and Nvidia.
Friday, March 25, 2011
Red Hat Releases Beta of First Update to Red Hat Enterprise Linux 6
Red Hat delivered its latest major operating platform release, Red Hat Enterprise Linux 6, in November 2010, setting a new standard of flexibility, efficiency
and control for customers' commercial open source environments. The
release included features applicable to all computing environments,
from physical to virtual to the cloud, with improvements in
performance, scalability and reliability. Red Hat Enterprise Linux 6 now takes another step forward with the availability of the beta for the first update to the platform. The beta includes new features, bug fixes and support for new hardware from Red Hat's key partners.
Labels: redhat
Nagwin - Nagios for Windows
Nagios for Windows is now available for users who want to run Nagios in a Windows environment. Tevfik Karagulle has produced a distribution package of Nagios Core for Windows systems using Cygwin.
You can download the package from SourceForge:
Download Nagwin
Labels: nagios
Wednesday, March 23, 2011
Open source: the electronic equivalent of generic drugs
Like the generic drugs that have
transformed health care provision in the South, open source software is royalty
and license free, and is therefore substantially cheaper to acquire than branded
alternatives. The reason for this is that open source software is developed by
volunteer collectives who are not seeking to profit from its
sale.
In addition, just as the recipe for generic drugs is made
public, so the source code or inner workings of open source software is
accessible to the user. Any qualified person can see exactly how the software
works and can easily make changes to the functionality.
- IICD
Open Source in Africa Brief
Labels: open source
Tuesday, March 22, 2011
The MeeGo Lifebook Fujitsu MH330 has landed
MeeGo is an emerging open source operating system for Web-centric
mobile devices, a category that now covers netbooks, slates and smartphones.
Similar to the iPhone OS and Android OS, the Linux-based MeeGo is a
light and fast operating system that enables a rich user experience via
graphics-enhanced widgets or apps that hook users instantly to social
networking sites and multimedia content online.
Recently, Fujitsu launched its Lifebook MH330—the first MeeGo netbook based on the Intel Atom processor.
The MeeGo OS installed on the Lifebook MH330 isn't the stock, or pure,
version that can be downloaded from the MeeGo website. Fujitsu has
heavily customized the platform to give it the company's trademark
look and feel.
Labels: fujitsu
hping
hping is a command-line oriented TCP/IP packet assembler/analyzer. Its
interface is inspired by the ping(8) Unix command, but hping isn't
only able to send ICMP echo requests. It supports the TCP, UDP, ICMP and
RAW-IP protocols, has a traceroute mode, the ability to send files
over a covert channel, and many other features.
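A couple of typical invocations as a quick sketch, assuming the hping3 binary and using example.com as a placeholder target:
# Send three TCP SYN probes to port 80 of the target
hping3 -S -p 80 -c 3 example.com
# Probe UDP port 53 instead
hping3 --udp -p 53 -c 3 example.com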
Labels: hping
Friday, March 18, 2011
WhisperCore brings device-level encryption to Android
Whisper Systems, the developer of the RedPhone voice encryption and TextSecure SMS encryption systems for Android phones, has now released WhisperCore. The software is a device-level encryption system intended to protect all the data on a user's Android phone.
Whisper Systems CTO and co-founder, Moxie Marlinspike, told CNET
that WhisperCore "uses AES with 256-bit keys in XTS mode, the same disk
encryption protocol that's proven itself in the PC space with tools
like TrueCrypt or LUKS (Linux Unified Key Setup)".
This first release is an early beta, labelled version 0.1, and is
described as a tech demo. It is currently only available for Nexus S
phones but is expected to expand to other devices soon. WhisperCore
will be available for free for individual use with pricing for
commercial use dependent on deployment size.
The web site states that the system integrates
with the Android operating system and protects all the data and programs
on the phone. It includes full-disk encryption and can be set so as to
also protect data held on the phone's SD card.
The WhisperCore Beta is available to download
on the Whisper Systems web site. Three installers are available, for
64-bit Linux, Mac OS X and 64-bit Windows. As a beta it should not be
used where security or stability is important.
PHP 5.3.6 Released!
The PHP development team would like to announce the immediate
availability of PHP 5.3.6. This release focuses on improving the
stability of the PHP 5.3.x branch with over 60 bug fixes, some of which
are security related.
All PHP users should note that the PHP 5.2 series is NOT supported anymore. All users
are strongly encouraged to upgrade to PHP 5.3.6.
Labels: php
Controversy surrounds Red Hat's "obfuscated" source code release
Red Hat has changed the way it ships the source code for the Linux
kernel. Previously, it was released as a standard kernel with a
collection of patches which could be applied to create the source code
of the kernel Red Hat used. Now though, the company ships a tarball of
the source code with the patches already applied. This change, noted by Maximilian Attems and LWN.net,
appears to be aimed at Oracle, which, like others, repackages Red Hat's
source as the basis for its Unbreakable Linux. Removing visibility
into which patches have, or have not, been applied creates difficulties
for companies like Oracle, which use the patch information to
know what state the Red Hat kernel is in before applying their own
patches.
The changes do not appear to violate the GPL version 2
which requires redistribution of source code where the "source code for
a work" is defined as the "preferred form of the work for making
modifications to it". The term applies to the entirety of the work, in
this case the Linux kernel and Red Hat is shipping that work in its
entirety.
When queried about the change, Red Hat officially had no comment,
but sources with an intimate knowledge of Red Hat's operations
confirmed to The H that the obfuscation was taking place and it was a
deliberate move to make business harder for downstream consumers of Red
Hat's Enterprise Linux source code, specifically Oracle.
Reports suggest that Red Hat's own engineers are also upset by the
move. Others outside the company are more blunt; Attems, for example,
said "Red Hat should really step back and not make such stupid
management moves". Although targeted at Oracle, the changes will make
work harder for distributions such as CentOS,
the community-built Linux distribution also based on Red Hat's sources.
CentOS is built from the RHEL source by a limited number of volunteers,
and Red Hat's change in policy will mean more work for them unless more
volunteers or other companies step in and provide them with assistance.
Read the article
Labels: news
Thursday, March 17, 2011
New Linux kernel goes faster
The newest update to the Linux operating-system kernel features a number
of enhancements that should offer a performance boost, particularly for
running databases and other programs that require maximum resources
from the server.
Linux
2.6.38 comes with a number of significant changes that should speed
performance, including the addition of new technologies such as
automatic process grouping and transparent huge pages. It also includes
significant improvements in the VFS (virtual file system).
With
automatic process grouping, the process scheduler groups all processes
with the same session ID as a single entity. A single program can spawn
multiple processes on a computer, which may then take up more resources
than necessary. Advocates say
that the process-grouping approach will allow programs to divide the
processor time more equitably, resulting in improved performance
overall.
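If you want to see whether your kernel is using the new feature, a quick check from a shell works; this assumes a 2.6.38 or later kernel built with autogroup support:
# 1 means automatic process grouping is active, 0 means it is off
cat /proc/sys/kernel/sched_autogroup_enabled
# Toggle it at runtime (as root)
echo 0 > /proc/sys/kernel/sched_autogroup_enabled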
Transparent huge pages increase the effectiveness of the processor
cache that stores frequently consulted memory address translations by
letting the kernel use larger memory pages.
Traditionally, page sizes have been limited to 4KB, though modern
processors support larger sizes. With larger page sizes, heavier
workloads such as database work hit that cache more often, reducing their execution times.
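The current policy can be inspected and changed through sysfs; the bracketed value is the active one (this path assumes a 2.6.38 or later kernel with THP compiled in):
# Typically prints: [always] madvise never
cat /sys/kernel/mm/transparent_hugepage/enabled
# Ask the kernel to use huge pages wherever possible (as root)
echo always > /sys/kernel/mm/transparent_hugepage/enabled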
VFS
has been made more scalable. Its directory cache and path lookup
mechanisms have been revamped, which should make multithreaded workloads
more scalable and single-threaded workloads execute faster. Torvalds
noted that of all the updates in this release, "my personal favorite
remains the VFS name lookup changes."
Beyond the performance enhancements, the updated kernel includes a number of other new features as well.
Read the article
Wednesday, March 16, 2011
PinguyOS is now out!
Pinguy OS is an optimised build of the Ubuntu 10.10 Minimal CD
with added repositories, tweaks and enhancements that can run as a Live
DVD or be installed. It has all the added packages needed for video,
music and web content, e.g. Flash and Java, plus a few fixes as well, such as fixes for the wireless problems, Gwibber's Facebook problem and full-screen Flash video.
PinguyOS can be downloaded here :
PinguyOS 32 bit
PinguyOS 64 bit
Labels: linux
Security: Fortifying Android Market Application Security: 11 Ways to Do It
Google faced one of its more serious attacks when developers laced
58 applications in the Android Market with malicious code. The
programs, which Google quickly removed March 1, were intended to grab
codes that identify mobile devices and determine the OS version running
on a device. Google not only notified police of the attacks and
suspended the developer accounts responsible for the suspicious
"DroidDream" malware, but took the unusual step of engaging
its kill switch. That is, Google remotely removed the
offending applications from users' devices. It’s only the second time
Google has taken such a step. As an open-source platform where Google
lets developers write code with great freedom and flexibility, Android
is an ideal target for malicious developers and hackers attempting to
dupe people or simply mess around with the Android Market applications.
Security experts weighed in with their thoughts on the matter. For this
slide show, eWEEK talked to some of those experts, including software
developers from security firms and analysts, to learn how Google can
improve security in its Android Market for mobile phone and tablet
users.
Reference
Labels: android
Cell phones are 'Stalin's dream,' says free software movement founder
Nearly three decades into his quest to rid the world of
proprietary software, Richard Stallman sees a new threat to user
freedom:
smartphones.
"I don't have a cell phone. I won't carry a cell phone," says
Stallman, founder of the free software movement and creator
of the GNU operating system. "It's Stalin's dream. Cell phones are
tools of Big Brother. I'm not going to carry a tracking
device that records where I go all the time, and I'm not going to
carry a surveillance device that can be turned on to eavesdrop."
Stallman firmly believes that only free software can save us from our technology, whether it be in cell phones, PCs, tablets
or any other device. And when he talks about "free," he's not talking about the price of the software -- he's talking about
the ability to use, modify and distribute software however you wish.
Stallman founded the free software movement in the early- to mid-1980s with the creation of the GNU project and the Free Software Foundation, of which he is still president.
Labels: open source
59 Open Source Tools That Can Replace Popular Security Software
It's been about a year since we last updated our list of open source
tools that can replace popular security software. This year's list
includes many old favorites, but we also found some that we had
previously overlooked.
In addition, we added a new category -- data loss prevention apps. With
all the attention generated by the WikiLeaks scandal, more companies are
investing in this type of software, and we found a couple of good open
source options.
Thanks to Datamation readers for their past suggestions of great open
source security apps. Feel free to suggest more in the comments section
below.
Anti-Spam
1. ASSP Replaces: Barracuda Spam and Virus Firewall, SpamHero, Abaca Email Protection Gateway
ASSP (short for "Anti-Spam SMTP Proxy") humbly calls itself "the
absolute best SPAM fighting weapon that the world has ever known!" It
works with most SMTP servers to stop spam and scan for viruses (using
ClamAV). Operating System: OS Independent.
2. MailScanner
Replaces: Barracuda Spam and Virus Firewall, SpamHero, Abaca Email Protection Gateway
Used by more than 100,000 sites, MailScanner leverages Apache's
SpamAssassin project and ClamAV to provide anti-spam and anti-virus
capabilities. It's designed to sit on corporate mail gateways or ISP
servers to protect end users from threats. Operating System: OS
Independent.
3. SpamAssassin
Replaces: Barracuda Spam and Virus Firewall, SpamHero, Abaca Email Protection Gateway
This Apache project declares itself "the powerful #1 open-source spam
filter." It uses a variety of different techniques, including header and
text analysis, Bayesian filtering, DNS blocklists, and collaborative
filtering databases, to filter out bulk e-mail at the mail server level.
Operating System: primarily Linux and OS X, although Windows versions
are available.
4. SpamBayes
This group of tools uses Bayesian filters to identify spam based on
keywords contained in the messages. It includes an Outlook plug-in for
Windows users as well as a number of different versions that work for
other e-mail clients and operating systems. Operating System: OS
Independent.
Anti-Virus/Anti-Malware
5. ClamAV Replaces Avast! Linux Edition, VirusScan Enterprise for Linux
Undoubtedly the most widely used open-source anti-virus solution, ClamAV
quickly and effectively blocks Trojans, viruses, and other kinds of
malware. The site now also offers paid Windows software called
"Immunet," which is powered by the same engine. Operating System: Linux.
6. ClamWin Free Antivirus
If you're looking for a free version of Clam for Windows, this is the
way to go. It's used by more than 600,000 people on a daily basis and
integrates with Outlook and Windows Explorer. Note however, that it
doesn't have an automatic real-time scanner—you have to click on
individual files in order to scan them. Operating System: Windows.
Anti-Spyware
7. Nixory Replaces Webroot Spy Sweeper, SpyBot Search and Destroy, AdAware
Nixory removes malicious cookies that you might have picked up while
browsing the Web with Internet Explorer, Firefox or Chrome. The latest
release includes a lightweight real-time scanner that deletes cookies
while you surf. Operating System: OS Independent.
Application Firewall
8. AppArmor Replaces: Barracuda Web Application Firewall, Citrix NetScaler Application Firewall
Included in both openSUSE and SUSE Linux Enterprise, Novell's
application firewall aims to secure Linux-based applications while
lowering IT costs. Key features include reports, alerts, sub-process
confinement, and more. Operating System: Linux.
The "most widely deployed WAF (Web Application Firewall) in existence,"
ModSecurity protects applications running on the Apache Web server. It
also monitors, logs, and provides real-time analysis of Web traffic.
Operating System: Windows, Linux.
Backup
10. Areca Backup Replaces: NovaBackup
Designed to be both simple and versatile, Areca lets you choose which
files to back up, set up a schedule and determine what type of backup to
perform (incremental, differential, full or delta). Notable features
include compression, encryption, as-of-date recovery and more. Operating
System: Windows, Linux.
11. Bacula
Replaces: Simpana Backup and Recovery, NetVault, HP StorageWorks EBS
Enterprise-ready Bacula backs up multiple systems connected to a
network. Users often say that it is easier to set up than similar
commercial programs, and it can write to many different types of storage
media. Operating System: Windows, Linux, OS X.
12. Amanda
Replaces: Simpana Backup and Recovery, NetVault, HP StorageWorks EBS
The "most popular open source backup and recovery software in the
world," Amanda backs up the data from more than half a million desktops
and servers. In addition to the free community version, it's also
available in a supported enterprise version, as an appliance or in the
cloud through Zmanda. Operating System: Windows, Linux, OS X.
13. Partimage
Replaces: Norton Ghost, NovaBackup, McAfee Online Backup, Carbonite.com
Partimage is particularly useful if you need to recover from a complete
system crash or if you need to install multiple images across a network.
It's very fast and can restore to a partition on a different system.
Operating System: Linux.
Browser Add-Ons
14. Web of Trust (WOT) Replaces: McAfee SiteAdvisor Plus
Web of Trust describes itself as "the world's leading community-based,
free safe surfing tool." It's very similar to SiteAdvisor, providing a
traffic light-like symbol that shows you the trustworthiness of a site
before you click. It works with all major browsers, including Firefox,
Internet Explorer, Chrome, Safari and Opera. Operating System: Windows,
Linux, OS X.
15. PasswordMaker
Replaces Kaspersky Password Manager, Roboform
If you struggle to create and remember unique passwords for all the
sites and services you use, PasswordMaker can help. With this tool, you
only need to remember one master password. And unlike other password
management systems, this plug-in doesn't save your passwords in a
database anywhere, so it's even more difficult for someone to figure out
your login credentials. Operating System: Windows, Linux, OS X.
Data Removal
16. BleachBit Replaces Easy System Cleaner
BleachBit frees up extra space on your hard drive while protecting your
privacy by erasing your cookies, temporary files, history, logs and
other junk. It also includes a "shredder" that completely erases all
traces of files you have deleted. Operating System: Windows, Linux.
17. Eraser
Replaces BCWipe Enterprise
Just because you've deleted a file doesn't mean it's actually gone from
your system. Eraser thoroughly eliminates data you don't want by writing
over it several times with random information. Operating System:
Windows
18. Wipe
Replaces BCWipe Enterprise
Very similar to Eraser, Wipe provides the same functionality for Linux
users. This site also provides a little bit more technical detail about
the process in case you're curious about how it works and want to drill
down into the geeky details. Operating System: Linux.
19. Darik's Boot and Nuke
Replaces Kill Disk, BCWipe Total WipeOut
Before you recycle or donate old systems, it's a good idea to delete all
the data on your drives. Darik's Boot and Nuke (DBAN for short) shreds
all data on any drives it can detect. Operating System: OS Independent.
Data Loss Prevention
20. OpenDLP Replaces RSA Data Loss Prevention Suite, CheckPoint DLP Software Blade, Symantec Data Loss Prevention Product Family
OpenDLP scans your network and identifies sensitive data at rest on your
Windows systems. It includes both a Web app, which lets system
administrators or compliance officers deploy the tool and view reports,
and a client, which runs inconspicuously on end users' systems.
Operating System: Windows.
21. MyDLP
Replaces RSA Data Loss Prevention Suite, CheckPoint DLP Software Blade, Symantec Data Loss Prevention Product Family
The creators of MyDLP strongly imply that if the U.S. government had
installed their software, it could have prevented the WikiLeaks scandal.
It detects and protects sensitive data from being transmitted, and it
installs in just 30 minutes. Operating System: Windows, Linux, VMware.
Encryption
22. AxCrypt Replaces McAfee Anti-Theft, CryptoForge
The "leading open source file encryption software for Windows," AxCrypt
has been registered by more than 2.1 million users. It's particularly
easy to use: simply right-click to encrypt and double-click to decrypt.
Operating System: Windows.
23. Gnu Privacy Guard
Replaces PGP Universal Gateway Email Encryption, Cypherus
Based on OpenPGP, "GPG" allows users to encrypt and sign digital
communication. This is a command-line version, but several other
projects offer graphical implementations of the same engine (see below).
Operating System: Linux.
24. GPGTools
Replaces Cypherus
This is a nice version of GPG for Mac users. Operating System: OS X.
25. Gpg4win
And, as you probably guessed, this is a version of GPG for Windows. This
one comes with excellent documentation. Operating System: Windows.
26. PeaZip
Replaces WinZip
Technically, PeaZip isn't an encryption tool; instead, like WinZip it's a
compression and archiving tool. However, like WinZip, PeaZip includes
encryption capability, and PeaZip reads and writes more formats than its
commercial counterpart. Operating System: Windows, Linux.
27. Crypt
Replaces McAfee Anti-Theft, CryptoForge
Lightweight and ultra-fast, Crypt encrypts and decrypts Windows files
with minimal fuss. In fact, you don't even have to install it on your
system in order to use it. Operating System: Windows.
28. NeoCrypt
Replaces McAfee Anti-Theft, CryptoForge
Like AxCrypt, NeoCrypt supports right-click encryption directly from
Windows Explorer (however, it does not support Windows 7). It offers
users a choice of 10 different encryption algorithms and includes batch
encryption capabilities. Operating System: Windows.
29. LUKS/cryptsetup
Replaces PGP Whole Disk Encryption
"Linux Unified Key Setup" or "LUKS" provides a standard format for hard
disk encryption that works on all Linux distributions. The cryptsetup
project makes LUKS usable on the desktop. Operating System: Linux.
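As a rough sketch of typical cryptsetup usage (the device name is a placeholder, and luksFormat destroys any existing data on the partition):
# Initialize a LUKS header on the partition
cryptsetup luksFormat /dev/sdb1
# Open it as a mapped device, create a filesystem, and mount it
cryptsetup luksOpen /dev/sdb1 securedata
mkfs.ext3 /dev/mapper/securedata
mount /dev/mapper/securedata /mnt/secure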
30. FreeOTFE
Replaces PGP Whole Disk Encryption
This tool creates virtual disks on your system that encrypt all data
stored there. It's easy to use, and can even be run from a thumb drive.
Operating System: Windows.
31. TrueCrypt
Replaces PGP Whole Disk Encryption
If you want to encrypt your entire drive or a partition of a drive (not
just a few files or folders), TrueCrypt does the job for you. Its
popularity continues to grow, and it has now been downloaded more than
17 million times, up from around 14 million downloads a year ago.
Operating System: Windows.
Secure File Transfer
32. WinSCP Replaces CuteFTP, FTP Commander
Downloaded more than 40 million times as of last November, WinSCP is a
very popular SFTP, FTP, and SCP client. Note that it offers a file
transfer client only (no server version). Operating System: Windows.
33. FileZilla
Replaces CuteFTP, FTP Commander
If you'd like to set up your own SFTP, FTP or FTPS file server,
FileZilla makes it easy. It also offers a client version of the software.
Note that while the client version works on all operating systems,
the server is for Windows only. Operating System: Windows, Linux, OS X.
Forensics
34. ODESSA Replaces EnCase Forensics, X-ways Forensics, AccessData Forensic Toolkit
Although it hasn't been updated in several years, the Open Digital
Evidence Search and Seizure Architecture, aka "ODESSA," offers several
different tools that can be useful in analyzing digital evidence and
reporting on findings. The site also offers several white papers related
to the topic. Operating System: Windows, Linux, OS X.
35. The Sleuth Kit/Autopsy Browser
Replaces EnCase Forensics, X-ways Forensics, AccessData Forensic Toolkit
The Sleuth Kit includes a set of digital investigation tools that run
from the command line. For those that prefer a graphical interface, the
Autopsy Browser provides a front-end to the tools. Operating System:
Windows, Linux, OS X.
Gateway/Unified Threat Management Appliances
36. Endian Firewall Community Replaces: Check Point Security Gateways, SonicWall, Symantec Web Gateway
With Endian Firewall Community, you can turn any PC into a Unified
Threat Management appliance. It includes firewall, antivirus, anti-spam,
content filtering and a VPN. The company also sells pre-configured
appliances and supported versions of the software. Operating System:
Linux.
37. Untangle Lite
Replaces: Check Point Security Gateways, SonicWall, Symantec Web Gateway
Like Endian, Untangle offers free software that you can use to create
your own multi-function Unified Threat Management appliance. Untangle
also offers preconfigured appliances, as well as paid versions of the
software with support and additional features. Operating System: Linux.
38. ClearOS
Replaces: Check Point Security Gateways, SonicWall, Symantec Web Gateway
Designed for smaller organizations, ClearOS combines network server
functionality with a gateway appliance. In addition to anti-spam,
anti-virus and the other usual assortment of security software, it
includes multi-WAN, groupware, database, Web server software and more.
Support and additional services are available for a fee. Operating
System: Linux.
39. NetCop UTM
Replaces: Check Point Security Gateways, SonicWall, Symantec Web Gateway
NetCop describes itself as "an identity-based UTM with stateful
inspection firewall, antivirus, web cache, content filter, IPS/IDS,
WANLink load balancer, bandwidth limiter, anonymous proxy blocker, WiFi
hotspot manager, SSL VPN manager, and much more!" It's free for up to
five concurrent users or available in paid SME or Enterprise versions.
Operating System: Linux.
Intrusion Detection
40. Open Source Tripwire Replaces Tripwire
Tripwire alerts IT staff when changes have been made to specified files
on networked systems, helping them to detect intrusions. The
standard version of Tripwire is no longer an open source project, but
the community-developed version is based on the original project code.
Operating System: Windows, Linux.
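A minimal sketch of how Open Source Tripwire is used once its policy has been configured:
# Build the baseline database of file signatures (run once, as root)
tripwire --init
# Later, compare the filesystem against the baseline and report changes
tripwire --check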
41. AFICK
Replaces Tripwire
Another File Integrity Checker, or AFICK, offers very similar
functionality to Tripwire. It was designed to be portable and
easy-to-install. Operating System: Windows, Linux.
Network Firewalls
42. IPCop Replaces Barracuda NG Firewall, Check Point Appliances
Designed for home or home office users, IPCop turns any basic PC into a
Linux-based firewall to protect your network. It can be accessed and
maintained via a Web interface and includes some good documentation, so
it's fairly easy to use. Operating System: Linux.
43. Devil-Linux
Replaces Barracuda NG Firewall, Check Point Appliances
Originally designed as another Linux-based network firewall, Devil-Linux
can now also serve as an application server. It can boot and run from a
CD-ROM or a USB thumb drive. Operating System: Linux.
44. Turtle Firewall
Replaces Barracuda NG Firewall, Check Point Appliances
This IPtables firewall also lets you create your own network firewall
from an existing PC. To set it up, you can either edit an XML document
directly or use an easy Web-based interface. Operating System: Linux.
45. Shorewall
Replaces Barracuda NG Firewall, Check Point Appliances
Also known as "Shoreline Firewall," Shorewall provides a tool for
configuring Netfilter. You can use it to create your own network
firewall or gateway appliance or to protect a standalone Linux system.
Operating System: Linux.
46. Vuurmuur
Replaces Barracuda NG Firewall
This iptables-based firewall can be used to create simple or very
complex firewall configurations. Key features include remote
administration via SSH, traffic shaping and powerful monitoring
capabilities. Operating System: Linux.
47. m0n0wall
Replaces Barracuda NG Firewall
Like most of the other apps in this category, m0n0wall allows you to
create your own firewall, but unlike most of the other firewalls here,
this one runs on FreeBSD, not Linux. It occupies just 12MB and can be
loaded from a compact flash card or a CD. Operating System: FreeBSD.
48. pfSense
Replaces Barracuda NG Firewall
This project is a fork of m0n0wall. While m0n0wall was created to be
used on embedded hardware, pfSense was designed to make it easier to use
on a full PC. It's been downloaded more than 1 million times and
protects networks of all sizes from home users to large corporations.
Operating System: FreeBSD.
49. Vyatta
Replaces Cisco products
Vyatta actively markets its products as an alternative to Cisco, and even offers a comparison chart
on its site. The "core" open source software can be used to create your
own firewall/networking appliances, or you can purchase supported
versions of the software or pre-built hardware appliances. Operating
System: Linux.
Network Monitoring
50. Wireshark Replaces: OmniPeek, CommView
The self-proclaimed "world's foremost network protocol analyzer,"
Wireshark has won quite a few awards and become a standard in the
industry. It allows users to capture and view the traffic on their
networks. Operating System: Windows, Linux, OS X.
51. tcpdump/libpcap
Replaces: OmniPeek, CommView
These command-line tools provide packet capture (libpcap) and analysis
(tcpdump) capabilities. They're powerful, but not particularly
user-friendly. Operating System: Linux.
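For example, a typical capture-and-inspect session might look like this (the interface name is a placeholder):
# Capture port 80 traffic on eth0 to a file
tcpdump -i eth0 -w web.pcap port 80
# Read the capture back and print packet summaries
tcpdump -r web.pcap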
52. WinDump
Replaces: OmniPeek, CommView
WinDump ports the tcpdump tools so they can be used on Windows systems.
The project is managed by the same company that owns Wireshark.
Operating System: Windows.
Password Crackers
53. Ophcrack Replaces Access Data Password Recovery Toolkit, Passware
For those occasions when passwords can't be recovered any other way,
Ophcrack can help systems administrators figure out lost passwords. It
uses the rainbow tables method to crack passwords, and it can run
directly from a CD. Operating System: Windows.
54. John the Ripper
John the Ripper excels at cracking weak Unix passwords. To use it,
you'll need a list of commonly used passwords. You can buy password
lists or enhanced versions of the software from the site. Operating
System: Windows, Linux, OS X.
Password Management
55. KeePass Password Safe Replaces Kaspersky Password Manager
Instead of struggling to remember dozens of different passwords or, even
worse, using the same password all the time, you can remember just one
master password while KeePass stores the rest in a secure database. It's
lightweight and easy-to-use, so it won't slow you down. Operating
System: Windows.
56. KeePassX
Replaces Kaspersky Password Manager
Originally, this project ported KeePass so that it could be used with
Linux. Now, it supports multiple operating systems and adds a few
features not in the original KeePass. Operating System: Windows, Linux,
OS X.
57. Password Safe
Replaces Kaspersky Password Manager
Password Safe offers the same functionality as KeePass, plus you can
create multiple databases for different types of passwords or different
people who use the same system. It's also available in a thumb-drive
version for a fee. Operating System: Windows.
User Authentication
58. WiKID Replaces Entrust IdentityGuard, Vasco Digipass, RSA's SecurID
Designed to be less-expensive than solutions that require hardware
tokens, WiKID uses software tokens to provide two-factor authentication.
In addition to the free community version, it's also available in an
enterprise version that's priced per user. Operating System: OS
Independent.
Web Filtering
59. DansGuardian Replaces McAfee Family Protection, NetNanny, CyberPatrol
DansGuardian runs on a Linux or OS X server to block objectionable content from any PC connected to the network (including Windows PCs). It uses URL and domain filtering, content phrase filtering, PICS filtering, MIME filtering, file extension filtering and POST limiting to block pornography and other content that you don't want your children or employees accessing. Operating System: Linux, OS X.
Labels: open source
First release of LibreOffice arrives with improvements over OOo
The Document Foundation (TDF) has announced the availability of
LibreOffice 3.3, the first official stable release of the open source
office suite. It introduces a number of noteworthy new features and
there are improvements throughout the included applications. More
significantly, the release reflects the growing strength of the nascent
LibreOffice project.
TDF was founded last year
when a key group of OpenOffice.org (OOo) contributors decided to form
an independent organization to develop a community-driven fork of OOo.
The move was necessitated by Oracle's failure to address the governance
problems that had plagued OOo under Sun's leadership, particularly the
project's controversial copyright assignment policies. Oracle's
acquisition of Sun and subsequent mismanagement of Sun's open source
assets have created further uncertainty about the future of OOo and the
sustainability of its community under Oracle's stewardship.
TDF got off to a good start and has attracted
a lot of enthusiasm from former OOo contributors; Google, Red Hat,
Canonical, and Novell are among its corporate supporters. The
development effort so far has been reasonably productive. Contributors
have been able to enhance LibreOffice with features that Sun had
resisted accepting upstream, including parts of Novell's popular Go-OOo
patch set. The LibreOffice developers have also incorporated significant
improvements taken from OpenOffice.org 3.3, which hasn't yet been
officially released.
The new features included in LibreOffice 3.3 improve the office
suite's feature set, usability, and interoperability with other formats.
For example, it has improved support for importing documents from Lotus
Word Pro and Microsoft Works. Another key new feature is the ability to
import SVG content and edit SVG images in LibreOffice Draw.
Navigation features in Writer have been improved, the thesaurus got
an overhaul, and the dialogs for printing and managing title pages got
major updates. LibreOffice Calc touts better Excel interoperability and
faster Excel file importing. The maximum size of a Calc spreadsheet has
increased to 1 million rows.
In addition to delivering feature improvements, the LibreOffice
developers have also focused heavily on code clean-up efforts with the
hope of reducing legacy cruft, thus making the code easier to maintain
and extend. Progress has been made, but the effort is still ongoing.
Due to the strong backing by the Linux community, the LibreOffice
fork will likely be bundled in upcoming versions of several major Linux
distributions. It's already planned for inclusion in Ubuntu 11.04, which
is coming in April.
LibreOffice 3.3 is available to download from the project's official website, with support for Linux, Windows, and Mac OS X. The source code can be found in the official LibreOffice version control repository, which is hosted on FreeDesktop.org.
Labels: LibreOffice
Tuesday, March 15, 2011
Japan: New radiation leaks harmful to health
SOMA, Japan – Radiation is spewing from damaged reactors at a
crippled nuclear power plant in tsunami-ravaged northeastern Japan in a
dramatic escalation of the 4-day-old catastrophe. The prime minister has
warned residents to stay inside or risk getting radiation sickness.
Chief Cabinet Secretary Yukio Edano said Tuesday that
a fourth reactor at the Fukushima Dai-ichi complex was on fire and that
more radiation was released.
Prime Minister Naoto Kan warned that there are
dangers of more leaks and told people living within 19 miles (30
kilometers) of the Fukushima Dai-ichi complex to stay indoors.
THIS IS A BREAKING NEWS UPDATE. Check back soon for further information. AP's earlier story is below.
TOKYO (AP) — Japan's nuclear safety agency said an
explosion Tuesday at an earthquake-damaged nuclear power plant may have
damaged a reactor's containment vessel and that a radiation leak is
feared.
The nuclear core of Unit 2 of the Fukushima Dai-ichi
nuclear plant in northeast Japan was undamaged, said a spokesman for the
Nuclear and Industrial Safety Agency, Shigekazu Omukai.
The agency suspects the explosion early Tuesday may
have damaged the reactor's suppression chamber, a water-filled tube at
the bottom of the container that surrounds the nuclear core, said
another agency spokesman, Shinji Kinjo. He said that chamber is part of
the container wall, so damage to it could allow radiation to escape.
"A leak of nuclear material is feared," said another
agency spokesman, Shinji Kinjo. He said the agency had no details of
possible damage to the chamber.
Radiation levels measured at the front gate of the Dai-ichi plant spiked following Tuesday's explosion, Kinjo said.
Detectors showed 11,900 microsieverts of radiation
three hours after the blast, up from just 73 microsieverts beforehand,
Kinjo said. He said there was no immediate health risk because the
higher measurement was less than the radiation a person receives from an
X-ray. He said experts would worry about health risks if levels exceed
100,000 microsieverts.
Reference
Labels: news
Monday, February 21, 2011
Easy Automated Snapshot-Style Backups with Linux and Rsync
This document describes a method for generating automatic rotating
"snapshot"-style backups on a Unix-based system, with specific examples
drawn from the author's GNU/Linux experience. Snapshot backups are a
feature of some high-end industrial file servers; they create the
illusion of multiple, full backups per day without the space or
processing overhead. All of the snapshots are read-only, and are accessible
directly by users as special system directories. It is often possible to
store several hours, days, and even weeks' worth of snapshots with slightly
more than 2x storage. This method, while not as space-efficient as some of
the proprietary technologies (which, using special copy-on-write
filesystems, can operate on slightly more than 1x storage), makes use of
only standard file utilities and the common rsync program, which is installed by
default on most Linux distributions. Properly configured, the method can
also protect against hard disk failure, root compromises, or even back up a
network of heterogeneous desktops automatically.
Motivation
Note: what follows is the original sgvlug DEVSIG announcement.
Ever accidentally delete or overwrite a file you were working on? Ever lose
data due to hard-disk failure? Or maybe you export shares to your
windows-using friends--who proceed to get outlook viruses that twiddle a
digit or two in all of their .xls files. Wouldn't it be nice if there were
a /snapshot directory that you could go back to, which had complete
images of the file system at semi-hourly intervals all day, then daily
snapshots back a few days, and maybe a weekly snapshot too? What if every
user could just go into that magical directory and copy deleted or
overwritten files back into "reality", from the snapshot of choice, without
any help from you? And what if that /snapshot directory were
read-only, like a CD-ROM, so that nothing could touch it (except maybe root,
but even then not directly)?
Best of all, what if you could make all of that happen automatically, using
only one extra, slightly-larger, hard disk? (Or one extra
partition, which would protect against all of the above except disk
failure).
In my lab, we have a proprietary NetApp file server which provides that sort of
functionality to the end-users. It provides a lot of other things too, but
it cost as much as a luxury SUV. It's quite appropriate for our heavy-use
research lab, but it would be overkill for a home or small-office
environment. But that doesn't mean small-time users have to do without!
I'll show you how I configured automatic, rotating snapshots on my $80 used
Linux desktop machine (which is also a file, web, and mail server) using
only a couple of one-page scripts and a few standard Linux utilities that
you probably already have.
I'll also propose a related strategy which employs one (or two, for the
wisely paranoid) extra low-end machines for a complete, responsible,
automated backup strategy that eliminates tapes and manual labor and makes
restoring files as easy as "cp".
Using rsync to make a backup
The rsync utility is a very well-known piece of GPL'd software,
written originally by Andrew Tridgell and Paul Mackerras. If you have a
common Linux or UNIX variant, then you probably already have it installed;
if not, you can download the source code from rsync.samba.org. Rsync's specialty is
efficiently synchronizing file trees across a network, but it works fine on
a single machine too.
Basics
Suppose you have a directory called source, and you want to
back it up into the directory destination. To accomplish that,
you'd use:
rsync -a source/ destination/
(Note: I usually also add the -v (verbose) flag so that
rsync tells me what it's doing.) This command is equivalent to:
cp -a source/. destination/
except that it's much more efficient if there are only a few differences.
Just to whet your appetite, here's a way to do the same thing as in the
example above, but with destination on a remote machine, over a
secure shell:
rsync -a -e ssh source/ username@remotemachine.com:/path/to/destination/
Trailing Slashes Do Matter...Sometimes
This isn't really an article about rsync, but I would like to
take a momentary detour to clarify one potentially confusing detail about
its use. You may be accustomed to commands that don't care about trailing
slashes. For example, if a and b are two directories, then
cp -a a b is equivalent to cp -a a/ b/. However,
rsync does care about the trailing slash, but only on the source
argument. For example, let a and b be two directories, with
the file foo initially inside directory a. Then this
command:
rsync -a a b
produces b/a/foo, whereas this command:
rsync -a a/ b
produces b/foo. The presence or absence of a trailing slash on
the destination argument (b, in this case) has no effect.
Using the --delete flag
If a file was originally in both source/ and destination/
(from an earlier rsync, for example), and you delete it from
source/, you probably want it to be deleted from
destination/ on the next rsync. However, the default
behavior is to leave the copy at destination/ in place.
Assuming you want rsync to delete any file from
destination/ that is not in source/, you'll need to
use the --delete flag:
rsync -a --delete source/ destination/
Be lazy: use cron
One of the toughest obstacles to a good backup strategy is human nature; if
there's any work involved, there's a good chance backups won't happen.
(Witness, for example, how rarely my roommate's home PC was backed up before
I created this system). Fortunately, there's a way to harness human
laziness: make cron do the work.
To run the rsync-with-backup command from the previous section every morning
at 4:20 AM, for example, edit the root cron table (as root):
crontab -e
Then add the following line:
20 4 * * * rsync -a --delete source/ destination/
Finally, save the file and exit. The backup will happen every morning at
precisely 4:20 AM, and root will receive the output by email. Don't copy
that example verbatim, though; you should use full path names (such as
/usr/bin/rsync and /home/source/) to remove any
ambiguity.
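Following that advice, the crontab entry above might become something like this (the paths are examples):
20 4 * * * /usr/bin/rsync -a --delete /home/source/ /backup/destination/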
Incremental backups with rsync
Since making a full copy of a large filesystem can be a time-consuming and
expensive process, it is common to make full backups only once a week or
once a month, and store only changes on the other days. These are
called "incremental" backups, and are supported by the venerable old
dump and tar utilities, along with many others.
However, you don't have to use tape as your backup medium; it is both
possible and vastly more efficient to perform incremental backups with
rsync.
The most common way to do this is by using the rsync -b --backup-dir=
combination. I have seen examples of that usage here, but I won't
discuss it further, because there is a better way. If you're not familiar
with hard links, though, you should first start with the following review.
Review of hard links
We usually think of a file's name as being the file itself, but really the
name is a hard link. A given file can have more than one hard link to
itself--for example, a directory has at least two hard links: the directory
name and . (for when you're inside it). It also has one hard
link from each of its sub-directories (the .. file inside each
one). If you have the stat utility installed on your machine,
you can find out how many hard links a file has (along with a bunch of other
information) with the command:
stat filename
Hard links aren't just for directories--you can create more than one link to
a regular file too. For example, if you have the file a, you
can make a link called b:
ln a b
Now, a and b are two names for the same file, as
you can verify by seeing that they reside at the same inode (the inode
number will be different on your machine):
ls -i a
232177 a
ls -i b
232177 b
So ln a b is roughly equivalent to cp a b, but
there are several important differences:
- The contents of the file are only stored once, so you don't use twice the space.
- If you change a, you're changing b, and vice-versa.
- If you change the permissions or ownership of a, you're changing those of b as well, and vice-versa.
- If you overwrite a by copying a third file on top of it, you will also overwrite b, unless you tell cp to unlink before overwriting. You do this by running cp with the --remove-destination flag. Notice that rsync always unlinks before overwriting! (Note, added 2002.Apr.10: the previous statement applies to changes in the file contents only, not permissions or ownership.) See the sketch below for a demonstration.
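Here's a small demonstration of that last point; the file names are arbitrary:
echo one > a
ln a b                        # a and b now share one inode
echo two > c
cp c a                        # plain cp writes through the link: b now contains "two"
cp --remove-destination c a   # a is unlinked first, so a becomes a new file
ls -i a b                     # a and b now have different inode numbers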
But this raises an interesting question. What happens if you rm
one of the links? The answer is that rm is a
bit of a misnomer; it doesn't really remove a file, it just removes that one
link to it. A file's contents aren't truly removed until the number of
links to it reaches zero. In a moment, we're going to make use of that
fact, but first, here's a word about cp.
Using cp -al
In the previous section, it was mentioned that hard-linking a file is
similar to copying it. It should come as no surprise, then, that the
standard GNU coreutils cp command comes with a -l
flag that causes it to create (hard) links instead of copies (it doesn't
hard-link directories, though, which is good; you might want to think about
why that is). Another handy switch for the cp command is
-a (archive), which causes it to recurse through directories
and preserve file owners, timestamps, and access permissions.
Together, the combination cp -al makes what appears to
be a full copy of a directory tree, but is really just an illusion that
takes almost no space. If we restrict operations on the copy to adding or
removing (unlinking) files--i.e., never changing one in place--then the
illusion of a full copy is complete. To the end-user, the only difference
is that the illusion-copy takes almost no disk space and almost no time to
generate.
2002.05.15: Portability tip: If you don't have GNU cp installed
(if you're using a different flavor of *nix, for example), you can use
find and cpio instead. Simply replace
cp -al a b with:
cd a && find . -print | cpio -dpl ../b
Thanks to Brage Førland for that tip.
Putting it all together
We can combine rsync and cp -al to create what
appear to be multiple full backups of a filesystem without taking multiple
disks' worth of space. Here's how, in a nutshell:
rm -rf backup.3
mv backup.2 backup.3
mv backup.1 backup.2
cp -al backup.0 backup.1
rsync -a --delete source_directory/ backup.0/
If the above commands are run once every day, then backup.0,
backup.1, backup.2, and backup.3 will appear to each
be a full backup of source_directory/ as it appeared today,
yesterday, two days ago, and three days ago, respectively--complete, except
that permissions and ownerships in old snapshots will get their most recent
values (thanks to J.W. Schultz for pointing this out). In reality, the
extra storage will be equal to the current size of source_directory/
plus the total size of the changes over the last three days--exactly the
same space that a full plus daily incremental backup with dump
or tar would have taken.
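To automate this, a minimal wrapper script along the following lines could be run from cron; the paths and the number of snapshots kept are illustrative:
#!/bin/sh
# make_snapshot.sh -- rotate three old snapshots and sync a fresh backup.0
SRC=/home/source/
DST=/root/snapshot
rm -rf $DST/backup.3
[ -d $DST/backup.2 ] && mv $DST/backup.2 $DST/backup.3
[ -d $DST/backup.1 ] && mv $DST/backup.1 $DST/backup.2
[ -d $DST/backup.0 ] && cp -al $DST/backup.0 $DST/backup.1
rsync -a --delete $SRC $DST/backup.0/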
Update (2003.04.23): As of rsync-2.5.6, the
--link-dest flag is now standard. Instead of the separate
cp -al and rsync lines above, you may now write:
mv backup.0 backup.1
rsync -a --delete --link-dest=../backup.1 source_directory/ backup.0/
This method is preferred, since it preserves original permissions and
ownerships in the backup. However, be sure to test it--as of this writing
some users are still having trouble getting --link-dest to work
properly. Make sure you use version 2.5.7 or later.
Update (2003.05.02): John Pelan writes in to suggest recycling the oldest
snapshot instead of recursively removing and then re-creating it. This
should make the process go faster, especially if your file tree is very
large:
mv backup.3 backup.tmp
mv backup.2 backup.3
mv backup.1 backup.2
mv backup.0 backup.1
mv backup.tmp backup.0
cp -al backup.1/. backup.0
rsync -a --delete source_directory/ backup.0/
2003.06.02: OOPS! Rsync's link-dest option does not play well with J.
Pelan's suggestion--the approach I previously had written above will result
in unnecessarily large storage, because old files in backup.0 will get
replaced and not linked. Please only use Dr. Pelan's directory recycling if
you use the separate cp -al step; if you plan to use
--link-dest, start with backup.0 empty and pristine. Apologies
to anyone I've misled on this issue. Thanks to Kevin Everets for pointing
out the discrepancy to me, and to J.W. Schultz for clarifying
--link-dest's behavior. Also note that I haven't fully tested
the approach written above; if you have, please let me know. Until then,
caveat emptor!
I'm used to dump or tar! This seems backward!
The dump and tar utilities were originally
designed to write to tape media, which can only access files in a certain
order. If you're used to their style of incremental backup,
rsync might seem backward. I hope that the following example
will help make the differences clearer.
Suppose that on a particular system, backups were done on Monday night,
Tuesday night, and Wednesday night, and now it's Thursday.
With dump or tar, the Monday backup is the big
("full") one. It contains everything in the filesystem being backed up. The
Tuesday and Wednesday "incremental" backups would be much smaller, since
they would contain only changes since the previous day. At some point
(presumably next Monday), the administrator would plan to make another full
dump.
With rsync, in contrast, the Wednesday backup is the big one. Indeed, the
"full" backup is always the most recent one. The Tuesday directory
would contain data only for those files that changed between Tuesday and
Wednesday; the Monday directory would contain data for only those files that
changed between Monday and Tuesday.
A little reasoning should convince you that the rsync way is
much better for network-based backups, since it's only necessary to
do a full backup once, instead of once per week. Thereafter, only the
changes need to be copied. Unfortunately, you can't rsync to a tape, and
that's probably why the dump and tar incremental
backup models are still so popular. But in your author's opinion, these
should never be used for network-based backups now that rsync
is available.
Isolating the backup from the rest of the system
If you take the simple route and keep your backups in another directory on
the same filesystem, then there's a very good chance that whatever damaged
your data will also damage your backups. In this section, we identify a few
simple ways to decrease your risk by keeping the backup data separate.
The easy (bad) way
In the previous section, we treated /destination/ as if it were
just another directory on the same filesystem. Let's call that the easy
(bad) approach. It works, but it has several serious limitations:
- If your filesystem becomes corrupted, your backups will be corrupted too.
- If you suffer a hardware failure, such as a hard disk crash, it might be very difficult to reconstruct the backups.
- Since backups preserve permissions, your users--and any programs or viruses that they run--will be able to delete files from the backup. That is bad. Backups should be read-only.
- If you run out of free space, the backup process (which runs as root) might crash the system and make it difficult to recover.
- The easy (bad) approach offers no protection if the root account is compromised.
Fortunately, there are several easy ways to make your backup more robust.
Keep it on a separate partition
If your backup directory is on a separate partition, then any corruption in
the main filesystem will not normally affect the backup. If the backup
process runs out of disk space, it will fail, but it won't take the rest of
the system down too. More importantly, keeping your backups on a separate
partition means you can keep them mounted read-only; we'll discuss that in
more detail in the next chapter.
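For example, an /etc/fstab entry along these lines keeps the backup partition mounted read-only by default (the device name and filesystem type are placeholders):
/dev/hdb1  /snapshot  ext3  defaults,ro  0  0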
Keep that partition on a separate disk
If your backup partition is on a separate hard disk, then you're also
protected from hardware failure. That's very important, since hard disks
always fail eventually, and often take your data with them. An entire
industry has formed to service the needs of those whose broken hard disks
contained important data that was not properly backed up.
Important: Notice, however, that in the event of hardware
failure you'll still lose any changes made since the last backup. For home
or small office users, where backups are made daily or even hourly as
described in this document, that's probably fine, but in situations where
any data loss at all would be a serious problem (such as where financial
transactions are concerned), a RAID system might be more appropriate.
RAID is well-supported under Linux, and the methods described in this
document can also be used to create rotating snapshots of a RAID system.
Keep that disk on a separate machine
If you have a spare machine, even a very low-end one, you can turn it into a
dedicated backup server. Make it standalone, and keep it in a physically
separate place--another room or even another building. Disable every single
remote service on the backup server, and connect it only to a dedicated
network interface on the source machine.
On the source machine, export the directories that you want to back up via
read-only NFS to the dedicated interface. The backup server can mount the
exported network directories and run the snapshot routines discussed in this
article as if they were local. If you opt for this approach, you'll only be
remotely vulnerable if:
- a remote root hole is discovered in read-only NFS, and
- the source machine has already been compromised.
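On the source machine, such an export might look like this in /etc/exports; here 192.168.1.2 stands in for the backup server's address on the dedicated interface:
/home  192.168.1.2(ro,no_root_squash)
The no_root_squash option lets root on the backup server read every file in the export, which the snapshot job needs.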
I'd consider this "pretty good" protection, but if you're (wisely) paranoid,
or your job is on the line, build two backup servers. Then you can make
sure that at least one of them is always offline.
If you're using a remote backup server and can't get a dedicated line to it
(especially if the information has to cross somewhere insecure, like the
public internet), you should probably skip the NFS approach and use
rsync -e ssh instead.
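In that case, the command from the Basics section simply grows an -e ssh option, along these lines (the hostname is a placeholder):
rsync -a --delete -e ssh source/ username@backupserver.example.com:/path/to/backup.0/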
It has been pointed out to me that rsync operates far more
efficiently in server mode than it does over NFS, so if the connection
between your source and backup server becomes a bottleneck, you should
consider configuring the backup machine as an rsync server instead of using
NFS. On the downside, this approach is slightly less transparent to users
than NFS--snapshots would not appear to be mounted as a system directory,
unless NFS is used in that direction, which is certainly another option (I
haven't tried it yet though). Thanks to Martin Pool, a lead developer of
rsync, for making me aware of this issue.
Here's another example of the utility of this approach--one that I use. If
you have a bunch of Windows desktops in a lab or office, an easy way to keep
them all backed up is to share the relevant files, read-only, and mount them
all from a dedicated backup server using SAMBA. The backup job can treat
the SAMBA-mounted shares just like regular local directories.
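On a Linux backup server that might look something like this (the share and mount point names are placeholders):
# Mount a desktop's shared folder read-only over SMB/CIFS
mount -t cifs //desktop1/work /mnt/desktop1 -o ro,guest
# ...then back it up like any local directory
rsync -a --delete /mnt/desktop1/ /root/snapshot/desktop1/backup.0/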
Making the backup as read-only as possible
In the previous section, we discussed ways to keep your backup data
physically separate from the data they're backing up. In this section, we
discuss the other side of that coin--preventing user processes from
modifying backups once they're made.
We want to avoid leaving the snapshot backup directory mounted read-write
in a public place. Unfortunately, keeping it mounted read-only the whole
time won't work either--the backup process itself needs write access. The
ideal situation would be for the backups to be mounted read-only in a
public place, but at the same time, read-write in a private directory
accessible only by root, such as /root/snapshot.
There are a number of possible approaches to the challenge presented by
mounting the backups read-only. After some amount of thought, I found a
solution which allows root to write the backups to the directory but only
gives the users read permissions. I'll first explain the other ideas I had
and why they were less satisfactory.
It's tempting to keep your backup partition mounted read-only as /snapshot
most of the time, but to unmount it and remount it read-write as
/root/snapshot during the brief periods while snapshots are being made.
Don't give in to temptation!
Bad: mount/umount
A filesystem cannot be unmounted if it's busy--that is, if some process is
using it. The offending process need not be owned by root to block an
unmount request. So if you plan to umount the read-only copy of the backup
and mount it read-write somewhere else,
don't--any user can accidentally (or deliberately) prevent the backup from
happening. Besides, even if blocking unmounts were not an issue, this
approach would introduce brief intervals during which the backups would seem
to vanish, which could be confusing to users.
Better: mount read-only most of the time
A better but still-not-quite-satisfactory choice is to remount the directory
read-write in place:
mount -o remount,rw /snapshot
[ run backup process ]
mount -o remount,ro /snapshot
Now any process that happens to be in /snapshot when the
backups start will not prevent them from happening. Unfortunately, this
approach introduces a new problem--there is a brief window of vulnerability,
while the backups are being made, during which a user process could write to
the backup directory. Moreover, if any process opens a backup file for
writing during that window, it will prevent the backup from being remounted
read-only, and the backups will stay vulnerable indefinitely.
Tempting but doesn't seem to work: the 2.4 kernel's mount --bind
Starting with the 2.4-series Linux kernels, it has been possible to mount a
filesystem simultaneously in two different places. "Aha!" you might think,
as I did. "Then surely we can mount the backups read-only in /snapshot,
and read-write in /root/snapshot at the same time!"
Alas, no. Say your backups are on the partition /dev/hdb1. If you run the
following commands,
mount /dev/hdb1 /root/snapshot
mount --bind -o ro /root/snapshot /snapshot
then (at least as of the 2.4.9 Linux kernel--updated, still present in the
2.4.20 kernel), mount will report /dev/hdb1 as being mounted read-write in
/root/snapshot and read-only in /snapshot, just as you requested. Don't
let the system mislead you!
It seems that, at least on my system, read-write vs. read-only is a property
of the filesystem, not the mount point. So every time you change the mount
status, it will affect the status at every point the filesystem is mounted,
even though neither /etc/mtab nor /proc/mounts will indicate the change.
In the example above, the second mount call will cause both of
the mounts to become read-only, and the backup process will be unable to
run. Scratch this one.
Update: I have it on fairly good authority that this behavior is considered
a bug in the Linux kernel, which will be fixed as soon as someone gets
around to it. If you are a kernel maintainer and know more about this
issue, or are willing to fix it, I'd love to hear from you!
My solution: using NFS on localhost
This is a bit more complicated, but until Linux supports mount --bind with
different access permissions in different places, it seems like the best
choice. Mount the partition where backups are stored somewhere accessible
only by root, such as /root/snapshot.
Then export it, read-only, via NFS, but only to the same machine. That's as
simple as adding the following line to /etc/exports:
/root/snapshot 127.0.0.1(secure,ro,no_root_squash)
then start nfs and portmap from /etc/rc.d/init.d/.
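On a Red Hat-style system, for example, that means something like the
following (init script names and locations vary by distribution):
/etc/rc.d/init.d/portmap start
/etc/rc.d/init.d/nfs start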
Finally, mount the exported directory read-only as /snapshot:
mount -o ro 127.0.0.1:/root/snapshot /snapshot
And verify that it all worked:
mount
...
/dev/hdb1 on /root/snapshot type ext3 (rw)
127.0.0.1:/root/snapshot on /snapshot type nfs (ro,addr=127.0.0.1)
At this point, we'll have the desired effect: only root will be able to
write to the backup (by accessing it through /root/snapshot). Other users
will see only the read-only /snapshot directory. For a little extra
protection, you could keep the backup partition mounted read-only in
/root/snapshot most of the time, and only remount it read-write while
backups are happening.
Damian Menscher pointed out this CERT advisory
which specifically recommends against NFS exporting to localhost,
though since I'm not clear on why it's a problem, I'm not sure whether
exporting the backups read-only as we do here is also a problem. If you
understand the rationale behind this advisory and can shed light on it,
would you please contact me? Thanks!
Extensions: hourly, daily, and weekly snapshots
With a little bit of tweaking, we can make multiple-level rotating snapshots.
On my system, for example, I keep the last four "hourly" snapshots (which
are taken every four hours) as well as the last three "daily" snapshots
(which are taken at midnight every day). You might also want to keep
weekly or even monthly snapshots too, depending upon your needs and your
available space.
Keep an extra script for each level
This is probably the easiest way to do it. I keep one script that runs
every four hours to make and rotate hourly snapshots, and another script
that runs once a day to rotate the daily snapshots. There is no need to use
rsync for the higher-level snapshots; just cp -al from the appropriate
hourly one.
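For example, with the layout used in the appendix scripts, promoting the
oldest hourly snapshot to a fresh daily one is a single hard-link-only copy:
cp -al /root/snapshot/home/hourly.3 /root/snapshot/home/daily.0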
Run it all with cron
To make the automatic snapshots happen, I have added the following lines to
root's crontab file:
0 */4 * * * /usr/local/bin/make_snapshot.sh
0 13 * * * /usr/local/bin/daily_snapshot_rotate.sh
They cause make_snapshot.sh to be run every four hours on the hour and
daily_snapshot_rotate.sh to be run every day at 13:00 (that is, 1:00 PM).
I have included those scripts in the appendix.
If you tire of receiving an email from the cron process every four hours
with the details of what was backed up, you can tell it to send the output
of make_snapshot.sh to /dev/null, like so:
0 */4 * * * /usr/local/bin/make_snapshot.sh >/dev/null 2>&1
Understand, though, that this will prevent you from seeing errors if
make_snapshot.sh cannot run for some reason, so be careful with it.
Creating a third script to check for any unusual behavior in the snapshot
periodically seems like a good idea, but I haven't implemented it yet.
Alternatively, it might make sense to log the output of each run, by piping
it through tee, for example. mRgOBLIN wrote in to suggest a better (and
obvious, in retrospect!) approach, which is to send stdout to /dev/null
but keep stderr, like so:
0 */4 * * * /usr/local/bin/make_snapshot.sh >/dev/null
Presto! Now you only get mail when there's an error. :)
Appendix: my actual configuration
I know that listing my actual backup configuration here is a security risk;
please be kind and don't use this information to crack my site. However,
I'm not a security expert, so if you see any vulnerabilities in my setup,
I'd greatly appreciate your help in fixing them. Thanks!
I actually use two scripts, one for every-four-hours (hourly) snapshots, and
one for every-day (daily) snapshots. I am only including the parts of the
scripts that relate to backing up /home, since those are the relevant ones
here.
I use the NFS-to-localhost trick of exporting /root/snapshot read-only as
/snapshot, as discussed above.
The system has been running without a hitch for months.
Listing one: make_snapshot.sh
#!/bin/bash
# ----------------------------------------------------------------------
# mikes handy rotating-filesystem-snapshot utility
# ----------------------------------------------------------------------
# this needs to be a lot more general, but the basic idea is it makes
# rotating backup-snapshots of /home whenever called
# ----------------------------------------------------------------------
unset PATH # suggestion from H. Milz: avoid accidental use of $PATH
# ------------- system commands used by this script --------------------
ID=/usr/bin/id;
ECHO=/bin/echo;
MOUNT=/bin/mount;
RM=/bin/rm;
MV=/bin/mv;
CP=/bin/cp;
TOUCH=/bin/touch;
RSYNC=/usr/bin/rsync;
# ------------- file locations -----------------------------------------
MOUNT_DEVICE=/dev/hdb1;
SNAPSHOT_RW=/root/snapshot;
EXCLUDES=/usr/local/etc/backup_exclude;
# ------------- the script itself --------------------------------------
# make sure we're running as root
if (( `$ID -u` != 0 )); then { $ECHO "Sorry, must be root. Exiting..."; exit 1; } fi
# attempt to remount the RW mount point as RW; else abort
$MOUNT -o remount,rw $MOUNT_DEVICE $SNAPSHOT_RW ;
if (( $? )); then
{
$ECHO "snapshot: could not remount $SNAPSHOT_RW readwrite";
exit 1;
}
fi;
# rotating snapshots of /home (fixme: this should be more general)
# step 1: delete the oldest snapshot, if it exists:
if [ -d $SNAPSHOT_RW/home/hourly.3 ] ; then \
$RM -rf $SNAPSHOT_RW/home/hourly.3 ; \
fi ;
# step 2: shift the middle snapshots(s) back by one, if they exist
if [ -d $SNAPSHOT_RW/home/hourly.2 ] ; then \
$MV $SNAPSHOT_RW/home/hourly.2 $SNAPSHOT_RW/home/hourly.3 ; \
fi;
if [ -d $SNAPSHOT_RW/home/hourly.1 ] ; then \
$MV $SNAPSHOT_RW/home/hourly.1 $SNAPSHOT_RW/home/hourly.2 ; \
fi;
# step 3: make a hard-link-only (except for dirs) copy of the latest snapshot,
# if that exists
if [ -d $SNAPSHOT_RW/home/hourly.0 ] ; then \
$CP -al $SNAPSHOT_RW/home/hourly.0 $SNAPSHOT_RW/home/hourly.1 ; \
fi;
# step 4: rsync from the system into the latest snapshot (notice that
# rsync behaves like cp --remove-destination by default, so the destination
# is unlinked first. If it were not so, this would copy over the other
# snapshot(s) too!)
$RSYNC \
-va --delete --delete-excluded \
--exclude-from="$EXCLUDES" \
/home/ $SNAPSHOT_RW/home/hourly.0 ;
# step 5: update the mtime of hourly.0 to reflect the snapshot time
$TOUCH $SNAPSHOT_RW/home/hourly.0 ;
# and that's it for home.
# now remount the RW snapshot mountpoint as readonly
$MOUNT -o remount,ro $MOUNT_DEVICE $SNAPSHOT_RW ;
if (( $? )); then
{
$ECHO "snapshot: could not remount $SNAPSHOT_RW readonly";
exit 1;
} fi;
As you might have noticed above, I have added an excludes list to the
rsync
call. This is just to prevent the system from backing up
garbage like web browser caches, which change frequently (so they'd take up
space in every snapshot) but would be no loss if they were accidentally
destroyed.
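The excludes file itself is just a list of rsync patterns, one per line,
with # starting a comment. The patterns below are illustrative only; tailor
them to the garbage your own users accumulate:
# /usr/local/etc/backup_exclude (illustrative)
# web browser caches
/*/.mozilla/*/Cache/
/*/.netscape/cache/
# scratch files
*~
core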
Listing two: daily_snapshot_rotate.sh
#!/bin/bash
# ----------------------------------------------------------------------
# mikes handy rotating-filesystem-snapshot utility: daily snapshots
# ----------------------------------------------------------------------
# intended to be run daily as a cron job when hourly.3 contains the
# midnight (or whenever you want) snapshot; say, 13:00 for 4-hour snapshots.
# ----------------------------------------------------------------------
unset PATH
# ------------- system commands used by this script --------------------
ID=/usr/bin/id;
ECHO=/bin/echo;
MOUNT=/bin/mount;
RM=/bin/rm;
MV=/bin/mv;
CP=/bin/cp;
# ------------- file locations -----------------------------------------
MOUNT_DEVICE=/dev/hdb1;
SNAPSHOT_RW=/root/snapshot;
# ------------- the script itself --------------------------------------
# make sure we're running as root
if (( `$ID -u` != 0 )); then { $ECHO "Sorry, must be root. Exiting..."; exit 1; } fi
# attempt to remount the RW mount point as RW; else abort
$MOUNT -o remount,rw $MOUNT_DEVICE $SNAPSHOT_RW ;
if (( $? )); then
{
$ECHO "snapshot: could not remount $SNAPSHOT_RW readwrite";
exit 1;
}
fi;
# step 1: delete the oldest snapshot, if it exists:
if [ -d $SNAPSHOT_RW/home/daily.2 ] ; then \
$RM -rf $SNAPSHOT_RW/home/daily.2 ; \
fi ;
# step 2: shift the middle snapshots(s) back by one, if they exist
if [ -d $SNAPSHOT_RW/home/daily.1 ] ; then \
$MV $SNAPSHOT_RW/home/daily.1 $SNAPSHOT_RW/home/daily.2 ; \
fi;
if [ -d $SNAPSHOT_RW/home/daily.0 ] ; then \
$MV $SNAPSHOT_RW/home/daily.0 $SNAPSHOT_RW/home/daily.1; \
fi;
# step 3: make a hard-link-only (except for dirs) copy of
# hourly.3, assuming that exists, into daily.0
if [ -d $SNAPSHOT_RW/home/hourly.3 ] ; then \
$CP -al $SNAPSHOT_RW/home/hourly.3 $SNAPSHOT_RW/home/daily.0 ; \
fi;
# note: do *not* update the mtime of daily.0; it will reflect
# when hourly.3 was made, which should be correct.
# now remount the RW snapshot mountpoint as readonly
$MOUNT -o remount,ro $MOUNT_DEVICE $SNAPSHOT_RW ;
if (( $? )); then
{
$ECHO "snapshot: could not remount $SNAPSHOT_RW readonly";
exit 1;
} fi;
Sample output of ls -l /snapshot/home
total 28
drwxr-xr-x 12 root root 4096 Mar 28 00:00 daily.0
drwxr-xr-x 12 root root 4096 Mar 27 00:00 daily.1
drwxr-xr-x 12 root root 4096 Mar 26 00:00 daily.2
drwxr-xr-x 12 root root 4096 Mar 28 16:00 hourly.0
drwxr-xr-x 12 root root 4096 Mar 28 12:00 hourly.1
drwxr-xr-x 12 root root 4096 Mar 28 08:00 hourly.2
drwxr-xr-x 12 root root 4096 Mar 28 04:00 hourly.3
Notice that each of the subdirectories of /snapshot/home/ is a complete
image of /home at the time the snapshot was made. Despite the w in the
directory access permissions, no one--not even root--can write to this
directory; it's mounted read-only.
Bugs
Maintaining Permissions and Owners in the snapshots
The snapshot system above does not properly maintain old
ownerships/permissions; if a file's ownership or permissions are changed in
place, then the new ownership/permissions will apply to older snapshots as
well. This is because rsync does not unlink files prior to changing them
if the only changes are ownership/permission. Thanks to J.W. Schultz for
pointing this out. Using his new --link-dest option, it is now trivial to
work around this problem. See the discussion in the Putting it all together
section of Incremental backups with rsync, above.
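To sketch how that works with the layout used here (this replaces the
cp -al step; it is not my scripts verbatim): rotate hourly.0 to hourly.1
with mv as usual, then let rsync hard-link unchanged files against
hourly.1 itself:
# hourly.0 was just rotated away, so it doesn't exist yet
rsync -a --delete --delete-excluded \
    --exclude-from="$EXCLUDES" \
    --link-dest=/root/snapshot/home/hourly.1 \
    /home/ /root/snapshot/home/hourly.0/
Since rsync creates the hard links itself, a file whose only change is
ownership or permissions gets a fresh copy in hourly.0, leaving the older
snapshots untouched.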
mv updates timestamp bug
Apparently, a bug in some Linux kernels between 2.4.4 and 2.4.9 causes mv
to update timestamps; this may result in inaccurate timestamps on the
snapshot directories. Thanks to Claude Felizardo for pointing this problem
out. He was able to work around the problem by replacing mv with the
following script:
MV=my_mv;
...
function my_mv() {
   REF=/tmp/makesnapshot-mymv-$$;   # temp file unique to this process
   touch -r "$1" $REF;              # save the source's timestamp
   /bin/mv "$1" "$2";
   touch -r $REF "$2";              # restore the timestamp on the destination
   /bin/rm $REF;
}
Windows-related problems
I have recently received a few reports of what appear to be interaction
issues between Windows and rsync.
One report came from a user who mounts a Windows share via Samba, much as I
do, and had files mysteriously being deleted from the backup even when they
weren't deleted from the source. Tim Burt also used this technique, and was
seeing files copied even when they hadn't changed. He determined that the
problem was modification time precision; adding --modify-window=10 caused
rsync to behave correctly in both cases. If you are rsync'ing from a
SAMBA share, you must add --modify-window=10 or you may get
inconsistent results. Update: --modify-window=1 should be sufficient.
Yet another update: the problem appears to still be there. Please let me
know if you use this method and files which should not be deleted are
deleted.
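Concretely, if your snapshot script reads from a SAMBA mount, the rsync
call gains one option (the source path here is a placeholder):
rsync -a --delete --modify-window=10 /mnt/desktop1/ /root/snapshot/desktop1/hourly.0/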
Also, for those who use rsync directly on cygwin, there are some known
problems, apparently related to cygwin signal handling. Scott Evans reports
that rsync sometimes hangs on large directories. Jim Kleckner informed me
of an rsync patch, discussed here and here,
which seems to work around this problem. I have several reports of this
working, and two reports of it not working (the hangs continue). However,
one of the users who reported a negative outcome, Greg Boyington, was able
to get it working using Craig Barrett's suggested sleep() approach, which is
documented here.
Memory use in rsync scales linearly with the number of files being sync'd.
This is a problem when syncing large file trees, especially when the server
involved does not have a lot of RAM. If this limitation is more of an issue
to you than network speed (for example, if you copy over a LAN), you may
wish to use mirrordir instead. I haven't tried it personally, but it looks
promising. Thanks to
Vladimir Vuksan for this tip!
Labels: debian,windows,centos,tool
rsync