Sunday, October 7, 2012

Ugly-looking gtk3 apps

Oftentimes, using themes that lack GTK3 styling in KDE or in lightweight desktop environments like Xfce, Openbox and LXDE results in the latest GTK applications, such as gedit, transmission and evince, looking unstyled and ugly.

Here's how gedit looks in Openbox when I use my 'xfce-4.0' GTK2 theme:


So, just copy the default Adwaita theme into your ~/.config directory with something like

cp -r /usr/share/themes/Adwaita/gtk-3.0 ~/.config/

After that, create or edit your ~/.config/gtk-3.0/settings.ini file and give it the following contents.

[Settings]
gtk-theme-name = Adwaita
gtk-icon-theme-name = fs-icons-ubuntu
gtk-fallback-icon-theme = fs-icons-ubuntu

# next option is applicable only if selected theme supports it
gtk-application-prefer-dark-theme = false

# set font name and dimension
gtk-font-name = Sans 8
gtk-toolbar-style=GTK_TOOLBAR_ICONS
gtk-toolbar-icon-size=GTK_ICON_SIZE_MENU
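
A quick sanity check: after the copy and the edit, the theme's CSS should sit next to your new settings.ini (the exact file names vary with the Adwaita version):

ls ~/.config/gtk-3.0/
# expect gtk.css (and the theme's other assets) alongside settings.ini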

And voilà, here's how it looks afterwards (pardon the change in gedit's color scheme):


Wednesday, August 22, 2012

Bypass proxy script for Pacman

A few months back I wrote a small bash script to bypass the download limit (14MB) imposed by our proxy server. What it essentially does is download files chunk-wise; in my case, it downloads 14MB chunks at a time and appends each one to the output file... all on-the-fly! It has served me pretty well all these days. Moreover, I also wanted to use it as Pacman's XferCommand to stay up-to-date.

Though the original version works, it needs a few tweaks. For bigger files, the output doesn't look streamlined. Like this...


I removed unwanted code like i) filename guessing from URLs and ii) output directory validations, and beautified/simplified its output to look like the screenshot below. To use it with pacman, edit /etc/pacman.conf and add an XferCommand like the one sketched below. That's it, now it works well with pacman too...
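
A minimal sketch of that pacman.conf line; the install path is my assumption (point it at wherever you saved the script), and %o and %u are pacman's placeholders for the output file and the download URL:

# /etc/pacman.conf
XferCommand = /usr/local/bin/pacman-curl.sh -f %o %u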





Here's the new code... It's also on github

#!/bin/bash
#
# Vikas Reddy @
# http://vikas-reddy.blogspot.in/2012/04/bypass-proxy-servers-file-size-download.html
#
#

# Erase the current line in stdout
erase_line() {
    echo -ne '\r\033[K'
}

# Asynchronously (as a different process) display the filesize continuously
# (once per second).
# * Contains an infinite loop; runs as long as this script is active
# * Takes one argument, the total filepath
async_display_size() {
    PARENT_PID=$BASHPID
    {
        # Run until this script ends
        until [[ -z "$(ps x | grep -E "^\s*$PARENT_PID")" ]]; do
            # Redraw the `du` line every second
            erase_line
            echo -n "$(du -sh "$1") "
            sleep 1
        done
    }&
    updater_pid="$!"
}

# Defaults
fsize_limit=$((14*1024*1024)) # 14MB
user_agent="Firefox/20.0"


# Command-line options
while getopts 'f:u:' opt "$@"; do
    case "$opt" in
        f) filepath="$OPTARG"    ;;
        u) user_agent="$OPTARG"  ;;
    esac
done
shift $((OPTIND - 1))

# Exit if no URL or filepath argument is provided
if [[ $# -eq 0 ]] || [[ -z "$filepath" ]]; then
    exit
fi

# Only one argument, please!
url="$1"

# Create/truncate the output file
truncate --size 0 "$filepath"


# Asynchronously (as a different process) start displaying the filesize
# even before the download is started!
async_display_size "$filepath"

# infinite loop, until the file is fully downloaded
for (( i=1; 1; i++ )); do

    # setting the range
    [ $i -eq 1 ] && start=0 || start=$(( $fsize_limit * ($i - 1) + 1))
    stop=$(( $fsize_limit * i ))

    # downloading
    curl --fail \
         --location \
         --user-agent "$user_agent" \
         --range "$start"-"$stop" \
         "$url" >> "$filepath" 2> /dev/null; # No progress bars and error msgs, please!

    # catching the exit status
    exit_status="$?"

    if [[ $exit_status -eq 22 ]] || [[ $exit_status -eq 36 ]]; then
        # Download finished
        erase_line
        echo "$(du -sh "$filepath")... done!"
        break
    elif [[ $exit_status -gt 0 ]]; then
        # Unknown exit status! Something has gone wrong
        erase_line
        echo "Unknown exit status: $exit_status. Aborting..."
        break
    fi

done


This can also be used as a standalone script to download files normally; you just need to specify the full path of the output file. Like this...

./pacman-curl.sh -f ~/downloads/video.mp4 'http://the-whole-url/'




Thursday, August 16, 2012

TMUX Cheatsheet

TMUX is a fitting competitor to GNU Screen. Here's a cheatsheet of most of its keybindings, lifted straight from its man page.

TMUX Cheatsheet.docx
TMUX Cheatsheet.pdf
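
For a quick taste before you grab the full sheet, a handful of the stock bindings (the default prefix is C-b):

C-b c    create a new window
C-b n    next window
C-b p    previous window
C-b %    split the current pane left and right
C-b "    split the current pane top and bottom
C-b o    move to the next pane
C-b d    detach the current client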

Monday, August 13, 2012

VIM Molokai colorscheme

Here’s a screenshot of my gVim…
  • Colorscheme: molokai
  • Font: Monaco 10pt
  • OS: Archlinux
  • Resolution: 1280x1024
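
For anyone wanting to reproduce this, a minimal ~/.vimrc sketch; it assumes the molokai colorscheme file is already installed under ~/.vim/colors/:

" molokai colors, Monaco 10pt in gVim
colorscheme molokai
if has('gui_running')
    set guifont=Monaco\ 10
endif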

Thursday, June 28, 2012

Archlinux: Compile ffmpeg with nonfree codecs

The ffmpeg binaries present in the official repository of Archlinux are not compiled with nonfree codecs; some excellent ones, like libfaac and libx264, are not enabled by default. However, with the AUR and its package-building automation tool yaourt, one can easily accomplish this. Here are the steps...
  1. Install yaourt using pacman -S yaourt, if you haven't already.
  2. Key in yaourt -Sb ffmpeg, and answer 'Yes' when asked for confirmation. Remember, you'll have to edit the PKGBUILD when prompted: add --enable-nonfree, --enable-libfaac and any other needed configure options to the build() function (a sketch follows below), exit the editor, and proceed.
  3. That's it; get yourself a cup of tea while the compilation finishes, and you're good to go.
PKGBUILD in vim
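
A sketch of the edited build(); the surrounding options and paths vary from one ffmpeg PKGBUILD revision to the next, so treat it as illustrative rather than verbatim:

build() {
  cd "$srcdir/ffmpeg-$pkgver"
  ./configure --prefix=/usr \
      --enable-gpl \
      --enable-nonfree \
      --enable-libfaac \
      --enable-libx264
  make
}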

For a list of available configuration options, see http://git.videolan.org/?p=ffmpeg.git;a=blob;f=configure;h=f30998b37c80ff83a81f4949ea8ab0cfb8043376;hb=HEAD

Wednesday, June 27, 2012

Wallpaperswide download script in Windows

A few visitors of my blog found it difficult to run my wallpaper download script (http://vikas-reddy.blogspot.in/2012/01/script-to-download-wallpapers-from.html) on their Windows OSes. So, here's what I did to run it on my Win XP Pro SP2 (32-bit).
  1. Install Ruby 1.9.3-p194 from  http://rubyinstaller.org/downloads/. Normally the defaults are fine. I installed it to my "C:\" folder
  2. Install nokogiri gem using "C:\Ruby193\bin\gem" install nokogiri
  3. Install wget from  http://gnuwin32.sourceforge.net/packages/wget.htm. Keep the defaults, or else, change the "wget" line in the script to reflect the correct executable path.
  4. Download the following code into a file named wallpaperswide-script.rb, and adjust the Resolution and Output_Directory variables; they default to "1600x900" and "C:\Wallpapers" respectively. Check out the original post for the available resolutions.
  5. Run it using "C:\Ruby193\bin\ruby.exe" "C:\wallpaperswide-script.rb"
  6. That's it, all your wallpapers will get downloaded into the given folder.
require 'open-uri'
require 'nokogiri'

Resolution = "1600x900"
Base_URL   = "http://wallpaperswide.com/#{Resolution}-wallpapers-r/page/"
Output_Directory = "C:\\Wallpapers"


(1..2805).each do |page_num|

  # Go page by page
  url = Base_URL + page_num.to_s

  # Parse html
  f = open(url)
  doc = Nokogiri::HTML(f)

  # Loop over image-boxes
  doc.css("div.thumb").each do |wallp|

    # Extract wallpaper subpage url
    wallp.css("div[onclick]").attr("onclick").value =~ /prevframe_show\('(.*)'\)/
    subpage_url = $1
    subpage_url =~ %r|http://wallpaperswide\.com/[^/]+/([\w\d]+)\.html|

    # Generate url of the required wallpaper
    wallp_url = %|http://wallpaperswide.com/download/#{$1}-#{Resolution}.jpg|
    
    # Download... with a user-agent parameter just in case...
    # use '--limit-rate=100k' to limit download speed
    system(%|"C:\\Program Files\\GnuWin32\\bin\\wget.exe" -c -U "Firefox/4.5.6" -P "#{Output_Directory}" "#{wallp_url}"|)

  end
end
I'm continually working on it. You can find the latest version of the code on my github account.


The screenshots below will guide you through the process...




Tuesday, June 26, 2012

Convert videos using ffmpeg to watch in Nokia 5230 / 5233 / 5800 / 5530 / X6 / N97

Nokia 5230/5800/.../X6 are (err... were) great smartphones, at least before the advent of the now-ubiquitous Android/Windows ones. They have excellent, high-resolution (640x360) screens which are great for watching videos on the go.

ffmpeg, too, has become the de facto encoding suite for converting videos, at least on Linux. Here are the ffmpeg commands I use to convert videos to a format which the aforementioned devices play without a hiccup. My 5230 plays MPEG4 videos encoded with lavc/xvid up to a resolution of 640x360; however, it plays videos encoded with the more advanced H264 codec only up to 320x240. (A note on the pad arithmetic below: a 2.35:1 movie scaled to 640 pixels wide is 640x272, and centering that on a 640x360 canvas leaves (360 - 272) / 2 = 44 pixels of black bar at the top, hence pad=640:360:0:44.)

# ffmpeg libxvid
 ffmpeg -i Input-Filename.avi -f mp4 -y \
   -vcodec libxvid -b:v 600k -acodec libfaac -b:a 96k -ac 2 -ar 44100 \
   -r 25 -s 640x272 -aspect 640:360 -vf pad=640:360:0:44 \
   -threads 2 -async 1 -pass 1 /dev/null
 ffmpeg -i Input-Filename.avi -f mp4 \
   -y -vcodec libxvid -b:v 600k -acodec libfaac -b:a 96k -ac 2 -ar 44100 \
   -r 25 -s 640x272 -aspect 640:360 -vf pad=640:360:0:44 \
   -threads 2 -async 1 -pass 2 ./Input-Filename-ffmpeg.mp4

# ffmpeg libx264
 ffmpeg -i Input-Filename.avi -f mp4 -y \
   -vcodec libx264 -b:v 600k -vpre ipod320 -acodec libfaac -b:a 96k -ac 2 -ar 44100 \
   -r 25 -s 320x196 -aspect 320:240 -vf pad=320:240:0:22 \
   -threads 2 -async 1 -pass 1 /dev/null
 ffmpeg -i Input-Filename.avi -f mp4 -y \
   -vcodec libx264 -b:v 600k -vpre ipod320 -acodec libfaac -b:a 96k -ac 2 -ar 44100 \
   -r 25 -s 320x196 -aspect 320:240 -vf pad=320:240:0:22 \
   -threads 2 -async 1 -pass 2 ./Input-Filename-ffmpeg.mp4

However, for batch processing and automation, a handy Bash shell script would be great. This is a small script I use to convert my videos. (Note: I'm continually working on it. The latest code will be on my github account)
#!/bin/bash
#
#    Vikas Reddy @ http://vikas-reddy.blogspot.com/
#
# ffmpeg libxvid
# --------------
# ffmpeg -i Input-Filename.avi -f mp4 -y \
#   -vcodec libxvid -b:v 600k -acodec libfaac -b:a 96k -ac 2 -ar 44100 \
#   -r 25 -s 640x272 -aspect 640:360 -vf pad=640:360:0:44 \
#   -threads 2 -async 1 -pass 1 /dev/null
# ffmpeg -i Input-Filename.avi -f mp4 \
#   -y -vcodec libxvid -b:v 600k -acodec libfaac -b:a 96k -ac 2 -ar 44100 \
#   -r 25 -s 640x272 -aspect 640:360 -vf pad=640:360:0:44 \
#   -threads 2 -async 1 -pass 2 ./Input-Filename-ffmpeg.mp4
#
# ffmpeg libx264
# --------------
# ffmpeg -i Input-Filename.avi -f mp4 -y \
#   -vcodec libx264 -b:v 600k -acodec libfaac -b:a 96k -ac 2 -ar 44100 \
#   -r 25 -s 320x196 -aspect 320:240 -vf pad=320:240:0:22 \
#   -threads 2 -async 1 -pass 1 /dev/null
# ffmpeg -i Input-Filename.avi -f mp4 -y \
#   -vcodec libx264 -b:v 600k -acodec libfaac -b:a 96k -ac 2 -ar 44100 \
#   -r 25 -s 320x196 -aspect 320:240 -vf pad=320:240:0:22 \
#   -threads 2 -async 1 -pass 2 ./Input-Filename-ffmpeg.mp4
#  
#  Usage
#  -----
#  Command-line options: 
#  -a : Video aspect ratio. Could be either 1.66 or 2.35 (default)
#  -b : Video bitrate. Should be in the form of 600k (default)
#  -c : Video codec. Should be either libx264 or libxvid (default)
#  -d : Output directory. Current directory (.) is the default
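#  -o : Extra ffmpeg options, passed through verbatim (quote them as one argument)
#  -p : Number of passes (parsed, but the script currently always runs two)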
#  -y : Don't ask for confirmation before overwriting files
#       (without -y, you're prompted before each overwrite)
#  
#  Examples
#  --------
#  1) ./ffmpeg-encode.sh The.Movie.Filename.avi 
#     would output the xvid-encoded video to The.Movie.Filename-ffmpeg.mp4 in the current directory
#  2) ./ffmpeg-encode.sh -a 1.66 -b 650k -c libx264 -d /home/vikas/downloads/ -y The.Movie.Filename.avi 
#  


# Command-line options
while getopts 'a:b:c:d:o:p:y' opt "$@"; do
    case "$opt" in
        a) video_aspect="$OPTARG" ;;
        b) vbitrate="$OPTARG" ;;
        c) video_codec="$OPTARG" ;;
        d) output_dir="$OPTARG" ;;
        o) addl_options="$OPTARG" ;;
        p) passes="$OPTARG" ;;
        y) ask_confirmation="no" ;;
    esac
done
shift $((OPTIND - 1))


# Defaults
video_aspect="${video_aspect:-2.35}"
video_codec="${video_codec:-libxvid}" # or libx264
vbitrate="${vbitrate:-600k}"
passes="${passes:-2}"
output_dir="${output_dir:-.}"


vpre_pass1=""
vpre_pass2=""

if [[ "$video_codec" == "libx264" ]]; then
    #vpre_pass1="-vpre fastfirstpass -vpre baseline"
    #vpre_pass2="-vpre hq -vpre baseline"
    aspect="320:240"

    if [[ "$video_aspect" == "2.35" ]]; then
        resolution="320x196"
        pad="pad=320:240:0:22"
    elif [[ "$video_aspect" == "1.66" ]]; then
        resolution="320x240"
        pad="pad=320:240:0:0"
    fi;

elif [[ "$video_codec" == "libxvid" ]]; then
    aspect="640:360"

    if [[ "$video_aspect" == "2.35" ]]; then
        resolution="640x272"
        pad="pad=640:360:0:44"
    elif [[ "$video_aspect" == "1.66" ]]; then
        resolution="640x360"
        pad="pad=640:360:0:0"
    fi;
fi;


echo "Encoding '${#@}' video(s)";

for in_file in "$@"; do

    # If the filename has no extension
    if [[ -z "$(echo "$in_file" | grep -Ei "\.[a-z]+$")" ]]; then
        fname="$(basename "${in_file}")-ffmpeg.mp4"
    else
        fname="$(basename "$in_file" | sed -r 's/^(.*)(\.[^.]+)$/\1-ffmpeg.mp4/')"
    fi
    out_file="${output_dir%/}/${fname}"

    # Avoid overwriting files
    if [[ "$ask_confirmation" != "no" ]] && [[ -f "$out_file" ]]; then
        echo -n "'$out_file' already exists. Do you want to overwrite it? [y/n] "; read response
        [[ -z "$(echo "$response" | grep -i "^y")" ]] && continue
    fi

    # 1st pass
    ffmpeg -i "$in_file" \
           -f mp4 -y $addl_options \
           -vcodec "$video_codec" -b:v "$vbitrate" \
           -acodec libfaac -b:a 96k -ac 2 -ar 44100 \
           -r 25 -s "$resolution" -aspect "$aspect" -vf "$pad" \
           -threads 2 -async 1 -pass 1  \
           "/dev/null"; # $out_file;

    # 2nd pass
    ffmpeg -i "$in_file" \
           -f mp4 -y $addl_options \
           -vcodec "$video_codec" -b:v "$vbitrate" \
           -acodec libfaac -b:a 96k -ac 2 -ar 44100 \
           -r 25 -s "$resolution" -aspect "$aspect" -vf "$pad" \
           -threads 2 -async 1 -pass 2  \
           "$out_file";
done

Its usage is simple too; it saves me from having to remember, edit and type the lengthy ffmpeg command each time.
To start with, without any command-line options, the given file is assumed to have a 2.35 (CinemaScope) aspect ratio, and is consequently encoded with the libxvid library to produce a nice 640x360 MP4 video. See below...
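
For example (these mirror the examples bundled in the script's header):

# defaults: 2.35 aspect, libxvid at 600k, output to the current directory
./ffmpeg-encode.sh The.Movie.Filename.avi

# 1.66 aspect, libx264 at 650k, custom output directory, overwrite without asking
./ffmpeg-encode.sh -a 1.66 -b 650k -c libx264 -d /home/vikas/downloads/ -y The.Movie.Filename.avi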

A small documentation with available command-line options and a few examples is bundled in the script itself.

NOTE: Because of licensing issues, the ffmpeg binaries available in the repositories of most Linux distros are not compiled with "non-free" codec support. This is especially true of libx264 and libfaac. You may have to abandon the binaries and compile the software from source. Google it!

Do post your views in the comments section below...


Wednesday, May 30, 2012

My Archlinux screenshot

This is a screenshot of my Archlinux installation, which I have been using since 2007. Since it's a rolling distribution, I'll never need to re-install it. And I've ported/copied this installation to many other machines with all the packages/settings/configuration intact.


The desktop consists of:
  1. Xfce 4.10
  2. Conky 1.9.0
  3. Xfce panel, for which no screen space is reserved. This is great for conserving desktop real estate!
  4. Iconset: Faience-Claire
  5. A 1280x800 wallpaper from wallpaperswide.com

Monday, April 30, 2012

Bypass proxy server's file size download limit restriction


   Many organizations and colleges restrict their employees and students from downloading files from the Internet that are larger than a prescribed limit; where I work, it is way too low at 14MB. Fret not! There are ways to bypass this. And here is a simple bash script I wrote to download much larger files at my workplace.

Note: This script works only with direct links and with servers which support resume-download functionality.

  I'm continually working on it. So, the latest version will be available on my github account.
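
Under the hood it's nothing fancier than HTTP range requests. Fetching a file in two 14MB chunks by hand would look like this (the URL and filename are placeholders; 14*1024*1024 = 14680064):

curl --fail --location --range 0-14680064 'http://example.com/big.iso' >> big.iso
curl --fail --location --range 14680065-29360128 'http://example.com/big.iso' >> big.iso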

How to run it?
  1. Download the following code to a text file named curl-multi-url.sh
  2. Give it executable permissions: chmod +x curl-multi-url.sh
  3. The file size limit, the fsize_limit variable, is set to 14MB. You may change it to your liking.
  4. The script takes the URL(s) of the file(s) to be downloaded as arguments; the output directory is passed with the optional -d flag and defaults to the current directory.
  5. For example: ./curl-multi-url.sh -d "$HOME/Downloads" http://ftp.jaist.ac.jp/pub/mozilla.org/firefox/releases/12.0/linux-i686/en-US/firefox-12.0.tar.bz2
  6. A slightly more complex example, using multiple URLs and both command-line options (-d for the output directory, and -u for the user-agent HTTP header):  ./curl-multi-url.sh -d ~/downloads/ -u "Chromium/18.0" http://ftp.jaist.ac.jp/pub/mozilla.org/firefox/releases/11.0/linux-x86_64/en-US/firefox-11.0.tar.bz2 http://ftp.jaist.ac.jp/pub/mozilla.org/firefox/releases/12.0/linux-i686/en-US/firefox-12.0.tar.bz2
#!/bin/bash
#
# Vikas Reddy @
#   http://vikas-reddy.blogspot.in/2012/04/bypass-proxy-servers-file-size-download.html
#
# 
# Usage:
#     ./curl-multi-url.sh -d OUTPUT_DIRECTORY -u USER_AGENT http://url-1/ http://url-2/;
#     Arguments -d and -u are optional
#
#

# Defaults
fsize_limit=$((14*1024*1024))
user_agent="Firefox/10.0"
output_dir="."


# Command-line options
while getopts 'd:u:' opt "$@"; do
    case "$opt" in
        d) output_dir="$OPTARG";;
        u) user_agent="$OPTARG";;
    esac
done
shift $((OPTIND - 1))


# output directory check
if [ -d "$output_dir" ]; then
    echo "Downloading all files to '$output_dir'"
else
    echo "Target directory '$output_dir' doesn't exist. Aborting..."
    exit 1
fi;


for url in "$@"; do
    filename="$(echo "$url" | sed -r 's|^.*/([^/]+)$|\1|')"
    filepath="$output_dir/$filename"

    # Avoid overwriting the file
    if [[ -f "$filepath" ]]; then
        echo -n "'$filepath' already exists. Do you want to overwrite it? [y/n] "; read response
        [ -z "$(echo "$response" | grep -i "^y")" ] && continue
    else
        cat /dev/null > "$filepath"
    fi

    echo -e "\nDownload of $url started..."
    i=1
    while true; do   # infinite loop, until the file is fully downloaded

        # setting the range
        [ $i -eq 1 ] && start=0 || start=$(( $fsize_limit * ($i - 1) + 1))
        stop=$(( $fsize_limit * i ))

        # downloading
        curl --fail --location --user-agent "$user_agent" --range "$start"-"$stop" "$url" >> "$filepath"

        exit_status="$?"

        # download finished
        [ $exit_status -eq 22 ] && echo -e "Saved $filepath\n" && break

        # other exceptions
        [ $exit_status -gt 0 ] && echo -e "Unknown exit status: $exit_status. Aborting...\n" && break

        i=$(($i + 1))
    done
done

Friday, March 30, 2012

Port your GTalk contacts to another account

Porting your GTalk contacts to another GTalk account is not as easy as the export-import feature of Gmail Contacts would have you believe, because your GTalk contacts are only a subset of your Gmail ones. So, if you export all your contacts from Account #1 to a CSV file and import them into Account #2, they get imported but do not appear in GTalk.

I've been using Pidgin for many years now, and I never knew it stores all the contacts it imports locally in an XML file, with the groups they belong to, their buddy icons, last-seen times, etc. ~/.purple/blist.xml is where all of this lives.

Steps


1. Install Pidgin and create accounts for both your Gmail IDs.
2. Open your Ruby console (irb) and run the following code

# Parse the xml document...
require 'rexml/document'
doc = REXML::Document.new(File.read("/home/YOUR_USERNAME/.purple/blist.xml"))

# Initialize empty arrays
contacts_1 = []
contacts_2 = []

# Get all your contacts in your account number 1, assume it as account_1@gmail.com
doc.elements.each('//buddy[starts_with(@account,"account_1@gmail.com")]/name') {|e| contacts_1 << e.text}

# Get all your contacts in your account number 2, assume it as account_2@gmail.com
doc.elements.each('//buddy[starts_with(@account,"account_2@gmail.com")]/name') {|e| contacts_2 << e.text}

# Get the email ids of all contacts which are present in account #2 but not in account #1
# i.e., the contacts who need to be invited
(contacts_2 - contacts_1).join(", ")


3. You'll get something like this as the output of the last line of code...
4. Multiple contacts can be invited at once using the Windows GTalk client. So, open it and paste this output into the Invite Contacts wizard as shown below

That's it! You have the requests sent to all your invitees from your new account.

Wednesday, February 22, 2012

cp: An amazing swiss knife of GNU

  The other day I was banging my head wondering how to infuse life (read: extra space) into my ageing laptop and into its Arch Linux installation, which, by the way, is turning five this year. The rolling-release model and its lightweight packaging bit me way back in 2007. I have used my old 80GB, Turion 64x2, 1.5GB DDR2 Presario V3029AU to the maximum, and it's showing.

   Last week, I tried to install Oracle 10g XE on Fedora 16 and it asked for a minimum of 1024MB of swap, a whisker short of what my hard disk actually had. So, I was left with only one dreaded option: to re-format my internal drive and make space for three Linux distros and a Windows 7 x64 primary partition with sufficient swap space. For that, I had to retain the contents of my Arch. A few options I found after scouting the Internet were:
  1. Use dd to take an as-is backup of the partition to an image file
  2. Use clonezilla to take backup to an image file
  3. Use partimage
  4. Use cp to simply copy all the files onto a separate "linux-compatible" filesystem (read: one which retains the permissions and ownership of the files)
  Obviously, as my main intention was to expand the partition in place or move the files onto a bigger partition, the first option was ruled out. Clonezilla required burning the live ISO onto a CD or a pen drive. The only sensible option left was cp.

cp -ax /source-path/* /destination-path/

   Mind you, I didn't know all that cp could do even after using it daily for the past seven years. This command simply copies all the files onto the new partition, of course with all the necessary bells and whistles such as permissions, file ownership and other attributes. Note the two command-line options: "-a" (archive mode) preserves permissions, ownership, timestamps and symlinks, while "-x" tells cp to stay on one file system. Ignoring the latter will leave you with copies of files from mounted partitions on your new partition too. These two options are absolutely essential for your new partition to work as intended.

  This left me with sufficient space (13GB each) for Arch, LMDE and Fedora 16.

  This scenario reinforces the simplicity of design and the everything-is-transparent-to-the-user philosophy of GNU/Linux. And cp truly is an amazing swiss knife of GNU!

Friday, January 13, 2012

A simple command-line Megaupload download manager

Have a bunch of megaupload links and ever felt the need for a simple, lightweight, text-based/command-line and eminently extensible script which does all the downloading for you?

Your search ends here!

Requirements

  1. wget: present by default in almost all Linux distros
  2. ruby and nokogiri: These can be installed as shown here.
Steps
  1. Save the script as megaupload_dm.rb
  2. Make it executable using chmod +x megaupload_dm.rb
  3. Install the dependencies
  4. Dump all your megaupload links into a file named megaupload_list.txt in the current directory. You can also insert other content or comments in this file (see the sample after this list)
  5. Tweak the LinkFile and OutputDirectory variables to your liking.
  6. Optionally, limit the download rate by adding --limit-rate=100k to the second wget command in the script (the one that performs the actual download)
  7. Run it using ./megaupload_dm.rb
  8. Your files are downloaded to ~/Downloads directory!
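
Step 4's megaupload_list.txt can look like this (the link IDs here are made up); anything that doesn't match the megaupload link pattern is silently skipped, which is why comments are fine:

# queued on friday
http://www.megaupload.com/?d=EXAMPLE1
http://www.megaupload.com/?d=EXAMPLE2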


#!/usr/bin/env ruby


# Is the url a valid one?
def valid?(url)
  url =~ %r|^http://www.megaupload.com/\?d=[a-zA-Z0-9]+$|
end

require 'rubygems'
require 'nokogiri'
require 'open-uri'

LinkFile = "./megaupload_list.txt"
OutputDirectory = "/home/vikas/Downloads"
TempFile = "/tmp/megaupload-tmp.html"


File.open(LinkFile, 'r') do |f|
  f.each_line do |link|

    link.strip!

    next unless valid?(link)

    begin

      puts "Fetching information for #{link} ..."
      `wget "#{link}" -U "Firefox/4.0.0" --quiet --output-document #{TempFile}`

      doc = Nokogiri::HTML(File.read(TempFile))

      # Filename and Direct Link
      fname = doc.css("div.downl_main div.download_file_name").first.content
      direct_link = doc.css("div.downl_main div.download_member_bl a").last.attributes['href'].value

      # Actual download
      puts "Downloading #{fname} ... \n"
      `wget -c -P "#{OutputDirectory}" -U "Firefox/3.6.3" "#{direct_link}"`
      puts "*********************************** DONE *******************************************"
      puts "\n\n"

      # Sleep 5 secs. Take some load off megaupload ;)
      sleep 5

    rescue
      puts "ERROR! FileNotFound or otherwise; Skipping this file...."
      puts "\n\n"
      next
    end

  end
end

Tuesday, January 3, 2012

A script to download wallpapers from wallpaperswide.com


Going through wallpaper websites like http://www.wallpaperswide.com/, have you ever wondered how cool it would be to have a magic wand that downloads all the wallpapers for you, without having to manually save the ten images on each of the 2500-odd pages?

Your search ends here!

Dependencies
  1. This script is written in Ruby. Most Linux distros ship a basic version of this scripting language; if not, install it along with rubygems using the following commands.
  2. The script needs the wget utility for downloading. wget is shipped with almost all Linux distros; if not, install it using your distro's package manager.
# If you don't use a Debian-based distro like Ubuntu or Linux Mint, replace 'apt-get' with your distro's package manager

# Install 'ruby' and 'rubygems' if not already installed
sudo apt-get install ruby rubygems

# Install 'wget' if not installed
sudo apt-get install wget

# Install 'nokogiri' gem
gem install --no-r{doc,i} nokogiri
  1. Save the following ruby code into a file named wallpaperswide_dot_com.rb
  2. Choose the resolution of the wallpapers you want to download. The list of resolutions supported are given in the beginning of the script. Tweak Resolution variable accordingly.
  3. Adjust the Output_Directory variable.
  4. Run the script by ruby wallpaperswide_dot_com.rb
  5. It'll be a 2+GB download depending on the resolution. If you desire to limit your download speed to say 100kbps, you could add an additional argument to the wget command (the third line from the end) like wget -c --limit-rate=100k -U "Firefox/4.5.6" ...;
#!/usr/bin/env ruby
#
#  Vikas Reddy
#
#  A little script to download ALL the wallpapers of a given
#  resolution from http://www.wallpaperswide.com/
#
#  Requirements
#  ============
#  Ruby Version: 1.9.2
#  Gems: nokogiri (open-uri ships with Ruby's standard library)
#  Other programs: wget
#
#
#  Available Resolutions
#  =====================
#
#  Wide
#  
#  * 16:10 960x600
#  * 16:10 1152x720
#  * 16:10 1280x800
#  * 16:10 1440x900
#  * 16:10 1680x1050
#  * 16:10 1920x1200
#  * 16:10 2560x1600
#  * 16:10 3840x2400
#  * 16:10 5120x3200
#  * 5:3 800x480
#  * 5:3 1280x768
#  
#  HD
#  
#  * 16:9 960x540
#  * 16:9 1024x576
#  * 16:9 1280x720
#  * 16:9 1366x768
#  * 16:9 1600x900
#  * 16:9 1920x1080
#  * 16:9 2048x1152
#  * 16:9 2400x1350
#  * 16:9 2560x1440
#  * 16:9 3554x1999
#  * 16:9 3840x2160
#  
#  Standard
#  
#  * 4:3 800x600
#  * 4:3 1024x768
#  * 4:3 1152x864
#  * 4:3 1280x960
#  * 4:3 1400x1050
#  * 4:3 1440x1080
#  * 4:3 1600x1200
#  * 4:3 1680x1260
#  * 4:3 1920x1440
#  * 4:3 2048x1536
#  * 4:3 2560x1920
#  * 4:3 2800x2100
#  * 4:3 3200x2400
#  * 4:3 4096x3072
#  * 5:4 1280x1024
#  * 5:4 2560x2048
#  * 5:4 3750x3000
#  
#  Mobile Ratio
#  
#  * VGA 240x320
#  * VGA 480x640
#  * VGA 320x240
#  * VGA 640x480
#  * WVGA 240x400
#  * WVGA 480x800
#  * WVGA 400x240
#  * WVGA 800x480
#  * HVGA 320x480
#  * HVGA 480x320
#  * HVGA 640x960
#  * HVGA 960x640
#  * iPad 1024x768
#  * iPad 768x1024
#  * HD 16:9 480x272
#  * HD 16:9 272x480
#  * Phone 176x220
#  * Phone 220x176
#  
#  Dual
#  
#  * 4:3 1600x600
#  * 4:3 2048x768
#  * 4:3 2304x864
#  * 4:3 2560x960
#  * 4:3 2800x1050
#  * 4:3 2880x1080
#  * 4:3 3200x1200
#  * 4:3 3360x1260
#  * 4:3 3840x1440
#  * 4:3 4096x1536
#  * 4:3 5120x1920
#  * 4:3 5600x2100
#  * 4:3 6400x2400
#  * 4:3 8192x3072
#  * 5:4 2560x1024
#  * 5:4 5120x2048
#  * 5:4 7500x3000
#  * 5:4 10240x4096
#  * 16:10 1920x600
#  * 16:10 2304x720
#  * 16:10 2560x800
#  * 16:10 2880x900
#  * 16:10 3360x1050
#  * 16:10 3840x1200
#  * 16:10 5120x1600
#  * 16:10 7680x2400
#  * 16:10 10240x3200
#  * 5:3 1600x480
#  * 5:3 2560x768
#  * 16:9 1920x540
#  * 16:9 2048x576
#  * 16:9 2560x720
#  * 16:9 3200x900
#  * 16:9 3840x1080
#  * 16:9 4096x1152
#  * 16:9 4800x1350
#  * 16:9 5120x1440
#  * 16:9 7108x2000
#  * 16:9 7680x2160
#  * 3:2 2880x960
#  * 3:2 4000x1333
#  * 3:2 2304x768
#  
#  Other
#  
#  * 3:2 1152x768
#  * 3:2 1440x960
#  * 3:2 2000x1333


require 'open-uri'
require 'nokogiri'

Resolution = "1600x900"
Base_URL   = "http://wallpaperswide.com/#{Resolution}-wallpapers-r/page/"
Output_Directory = "/home/vikas/Wallpapers/"

# Create the Output_Directory if needed
`mkdir -p "#{Output_Directory}"`

(1..2492).each do |page_num|

  # Go page by page
  url = Base_URL + page_num.to_s

  # Parse html
  f = open(url)
  doc = Nokogiri::HTML(f)

  # Loop over image-boxes
  doc.css("div.thumb").each do |wallp|

    # Extract wallpaper subpage url
    wallp.css("div[onclick]").attr("onclick").value =~ /prevframe_show\('(.*)'\)/
    subpage_url = $1
    subpage_url =~ %r|http://wallpaperswide\.com/[^/]+/([\w\d]+)\.html|

    # Generate url of the required wallpaper
    wallp_url = %|http://wallpaperswide.com/download/#{$1}-#{Resolution}.jpg|
   
    # Download... with a user-agent parameter just in case...
    # use '--limit-rate=100k' to limit download speed
    `wget -c -U "Firefox/4.5.6" -P "#{Output_Directory}" "#{wallp_url}"`
  end
end

Monday, January 2, 2012

A script to compile a latex-beamer presentation into all themes

I always wondered how great and handy it would be to have a nifty script that compiles my LaTeX presentation with every theme present in my current TeX installation, so that I could look at all the output files and pick the one that looks best.

And, this is the little bash shell script I've written to fulfill my need.
  1. Save this script into your Latex project directory
  2. Adjust the variables listed in the script to your liking ($themes_path and $output_directory)
  3. Give executable permission using "chmod +x $filename" or otherwise
  4. Execute it! It takes your ".tex" file as the command-line argument and outputs the PDFs into a directory named "change_themes", one PDF for every theme your Beamer installation provides.
PS: It won't disturb/write to your original .tex file; it only reads it.

#!/bin/bash

themes_path="/usr/share/texmf/tex/latex/beamer/base/themes/theme"
filename="$1"
output_directory="./change_themes"

# Needs the .tex file as the command-line argument
if [[ -z "$filename" ]]; then
    echo "Needs a command-line argument";
    exit;
fi

# Create the output directory if needed
if ! [[ -d "$output_directory" ]]; then
    mkdir -p "$output_directory";
fi

for theme in $themes_path/*.sty; do
    theme_name="$(echo "$(basename "$theme")" | sed -re 's/beamertheme(.*)\.sty/\1/')"
    sed -r "s/usetheme\{(.*)\}/usetheme{$theme_name}/" "$filename" > "$output_directory/$filename"
    pdflatex -output-directory="$output_directory" "$output_directory/$filename"
    pdf_fname="$(basename "$filename" ".tex").pdf"
    pdf_fname_new="${theme_name}_${pdf_fname}"
    mv "$output_directory/$pdf_fname" "$output_directory/$pdf_fname_new"
done