Tag Archives: bash

Setting Default Browser in Linux When Things are Stubborn

Running XFCE4 and setting the preferred applications generally works. However, I noticed that some applications still would not listen to my demand to use a specific browser when launching links. For example, python's webbrowser module, and therefore almost any other module that opens external links, would use Firefox or Opera (depending on which was available).

# Checking python invocation of a browser
import webbrowser as w
w.get().open('http://mutaku.com')

Here's the culprit: check your /etc/alternatives/ directory and you'll see what I mean.

ls -l /etc/alternatives/

Notice the entry x-www-browser. This is a symlink to a browser, and its target was different from what I was attempting to set through the XFCE4 GUI configuration. This problem seems fairly common, as I've seen it occur in KDE and GNOME as well.

To fix this issue, simply run the following command and you will be presented with a list of browsers from which you can choose.

# Update your browser choice and choose from the presented text menu
sudo update-alternatives --verbose --config x-www-browser

# Now make sure the proper symlink is set up
ls -l /etc/alternatives/x-www-browser
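
If you already know exactly which browser you want, update-alternatives can also be pointed at it directly instead of using the interactive menu. A minimal sketch, assuming chromium lives at /usr/bin/chromium on your system and is already registered as an alternative (any path listed in the --config menu will do):

# Set the alternative non-interactively (adjust the path to your browser)
sudo update-alternatives --set x-www-browser /usr/bin/chromium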

Now you should have a functional default choice. Those GUIs can suck it!

Checking your SSH key Fingerprint

As many of you probably are already well aware, Github recently was made aware of a mass-assignment vulnerability (1). If you aren’t caught up, you can find the info here on the Github blog (2, 3).

This afternoon, Github sent out an email to users saying that it had disabled all SSH keys on users' accounts and that they would remain suspended until validated. You can validate your keys by visiting the SSH keys section under your account settings where you will then be able to reject or validate each of your stored keys. These are listed by machine ID and key fingerprint.

But how do you know the fingerprint of your SSH key to ensure it is the correct one for each machine ID? Simply SSH into each machine and run the following, replacing the location of the key to match where you have yours stored:

ssh-keygen -lf ~/.ssh/id_rsa.pub
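
If you have more than one key pair on a machine, you can fingerprint every public key in ~/.ssh in one shot with a small loop (a quick sketch; adjust the glob if your keys live elsewhere):

# Print the fingerprint of every public key in ~/.ssh
for key in ~/.ssh/*.pub; do
    ssh-keygen -lf "$key"
done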

Analyzing Brute Force Attempts in BASH

If you have a public facing SSH server running on the standard port, your message log is probably filled with failed authentication attempts from brute force attacks. Let’s mash up some quick BASH commands to analyze this data. For our purposes, we’ll look at the top attacker IPs and the top usernames tried.

First, pull down all of your message files and decompress them in a working directory using bunzip2.

Once you have all your message logs ready, we will search through them and pull out all of the authentication failure entries and grab the IP and username for each attempt.

grep -r "authentication error" messages* | awk '{split($0,a," "); print a[NF],a[NF-2]}' > attempts

So we first use grep to look for failed authentications by searching recursively for the string "authentication error" in all of our message logs using the wildcard. We then pipe this to awk and split each matching line into an array on whitespace. The end of each line, and therefore of our new array, goes something like: 'authentication error for USERNAME from IP'. To get the username and IP from the array we can use NF, the number of fields on the line, to index the elements we need: the last field (the IP) with a[NF] and the field two back from it (the username) with a[NF-2]. Finally, we output this to a file called attempts.

Now, let’s use more BASH magic to do the analyses for us. Our attempts file now is in the format as follows: IP USERNAME. We want to see the top IPs and usernames and we can do that with some sorting commands.

cut -d ' ' -f2 attempts | sort | uniq -c | sort -nr > attempts_username
cut -d ' ' -f1 attempts | sort | uniq -c | sort -nr > attempts_ip

Here, we simply grab either the username in the second column or the IP in the first with cut, sort this data, prepend lines with the number of occurrences of each, and then sort by this occurrence number and output to a new file. You can now view attempts_username and attempts_ip to see the top usernames and IPs, respectively, of brute force attacks.

Lastly, we can keep the usernames and IPs associated together and sort on one or the other to see the correlation between the two. To end our initial analyses, we will sort on usernames and find out, for the top attempted usernames, which IPs they most often originate from.

sort -k 2,2 attempts | uniq -c | sort -nr | head -n 10

Next time we will be using some GEOIP methods to see where our top attack attempts are originating.
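
As a quick preview, if you have the geoip-bin package installed (it provides the geoiplookup tool), you can already get a rough country breakdown of the attacking IPs. A hedged sketch, assuming the attempts file generated above:

# Look up the country for each unique attacking IP
cut -d ' ' -f1 attempts | sort -u | while read ip; do
    echo "$ip: $(geoiplookup "$ip")"
done > attempts_geo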

Bash Snippet for Git Users

Keeping track of what branch you are using while in a git repository can be a bit of a pain. Adding the following code to your ~/.bashrc file can make it a bit easier since it will prepend your prompt with the current branch.

function parse_git_branch {
    GIT_BRANCH=$(git branch --no-color 2> /dev/null | awk '{if ($1 == "*") { printf "(%s) ",$2}}')
    PS1="$GIT_BRANCH\u@\h:\w\$ "
}

PROMPT_COMMAND="parse_git_branch"

Here it is working:
matt@desktop:~$ source /etc/bash.bashrc 
matt@desktop:~$ cd programming/bits
(master) matt@desktop:~/programming/bits$ 

Python – Dynamically Printing to Single Line of Stdout

It’s that time again; code snippet time. This is a short and sweet bit of code that has some really practical and powerful implications.

When handling large amounts of data processing, it’s often desirable to keep the user informed of where things are in terms of progress. However, filling up the scrollback buffer with thousands of “Completed processing [item X]” or “X % completed” is not always desirable. So, how do we handle verbosity without filling scrollback?

Here, we will print data using stdout but we will continually use the same line of the terminal. It’s extremely simple; before each data print, we will move the cursor back to the beginning of the current line, cut out the entire line of text, and then print our current bit of data.

*Note: This is using ANSI escape sequences and therefore will work on any VT100 type terminal.


The code:

import sys

class Printer():
	"""
	Print things to stdout on one line dynamically
	"""

	def __init__(self,data):

		sys.stdout.write("\r\x1b[K"+data.__str__())
		sys.stdout.flush()


To output data we might do something like:

totalFiles = len(fileList)
currentFileNum = 1.0

for f in fileList:
	ProcessFile(f)
	currentPercent = currentFileNum/totalFiles*100
	output = "%f of %d completed." % (currentPercent,totalFiles)
	Printer(output)
	currentFileNum += 1


And here is an example of what our output would look like in a practical scenario.

[Screenshot: processing some data]

[Screenshot: progressing along with dynamic status updates]

Simple. You can test it out using a 'for x in range(1,100)' sort of loop as well. Now we can keep a persistent display of data processing or program progress without slamming the scrollback buffer of the user's terminal.
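
The same trick works straight from BASH, too, using printf with a carriage return and the clear-to-end-of-line escape. A rough sketch, assuming a VT100-compatible terminal:

# BASH version of the single-line progress display
for x in $(seq 1 100); do
    printf '\r\033[K%d%% completed' "$x"
    sleep 0.05
done
printf '\n'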

Installing Google Chrome in Slackware 13 (13.37)

Trying to install Google-Chrome via RPM (with rpm2tgz) will give you an error with libnss3. Don’t fret, there is already a way to build Google-Chrome without much effort. Below is a quick way using the mirror from Penn State (carroll), but if you have your install media, you can find the same files there in /extra.

Go to: ftp://carroll.cac.psu.edu/pub/linux/distributions/slackware/slackware-13.37/extra/google-chrome/

The README file is pretty self explanatory, but essentially you want to do the following:

  1. Become root
  2. Set up a build directory (mkdir chrome && cd chrome)
  3. Wget these files (see the consolidated commands below): [ ftp://carroll.cac.psu.edu/pub/linux/distributions/slackware/slackware-13.37/extra/google-chrome/google-chrome.SlackBuild ] [ ftp://carroll.cac.psu.edu/pub/linux/distributions/slackware/slackware-13.37/extra/google-chrome/google-chrome-pam-solibs-1.1.3-i486-1.txz ] [ ftp://carroll.cac.psu.edu/pub/linux/distributions/slackware/slackware-13.37/extra/google-chrome/ORBit2-2.14.19-i486-1.txz ] [ ftp://carroll.cac.psu.edu/pub/linux/distributions/slackware/slackware-13.37/extra/google-chrome/GConf-2.32.1-i486-1.txz ]
  4. Set the execute bit on the SlackBuild file and run it (chmod +x google-chrome.SlackBuild && ./google-chrome.SlackBuild)
  5. Go to /tmp and install the package (cd /tmp && upgradepkg --install-new google-chrome.txz)
  6. Go back and install the other files (cd /root/chrome/ && upgradepkg --install-new *.txz)
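
For convenience, here is the whole procedure as a single copy-and-paste block. This is just a sketch of the steps above (run as root); it assumes the PSU mirror paths are still valid and that the SlackBuild drops its package in /tmp:

mkdir chrome && cd chrome
MIRROR=ftp://carroll.cac.psu.edu/pub/linux/distributions/slackware/slackware-13.37/extra/google-chrome
wget $MIRROR/google-chrome.SlackBuild \
     $MIRROR/google-chrome-pam-solibs-1.1.3-i486-1.txz \
     $MIRROR/ORBit2-2.14.19-i486-1.txz \
     $MIRROR/GConf-2.32.1-i486-1.txz
chmod +x google-chrome.SlackBuild && ./google-chrome.SlackBuild
# the built package name may carry a version string, so use a glob
cd /tmp && upgradepkg --install-new google-chrome*.txz
cd /root/chrome/ && upgradepkg --install-new *.txz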

Now you should have a fully functional google-chrome install.

Grep a Tail to Monitor Your Access Log in Color

Here is a quick example of how to monitor your access log and colorize the IP address of the remote user:

tail -f /var/log/lighttpd/access.log | egrep --color=auto '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}' 

It is crude, but works well enough (generally). The first part tails our webserver access log using the -f switch. This is piped to egrep with the --color=auto switch to colorize the portion we want, which is an IP address. The result is that the IP address of the remote user making the request is colorized, which makes it easier to watch such logs in a terminal. It isn't perfect and you can modify the regular expression to further foolproof it, but it works well enough for simple differentiation.

We can further adapt this slightly. Here are two more examples using this scheme. The first could filter out your local subnet, while the second filters out a local email client.

tail -f /var/log/lighttpd/access.log | grep -v "192.168.1." | egrep --color=auto '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}' 

tail -f /var/log/lighttpd/access.log | grep -v "Thunderbird" | egrep --color=auto '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}' 

As you can see, we now pipe the tail to grep with the -v switch, which filters out any lines that contain the quoted text. You could put anything here, such as IP addresses, particular page requests, certain domains, etc.
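
For instance, to ignore hits on one particular page while still colorizing the remote IPs (favicon.ico is just an illustrative choice here):

tail -f /var/log/lighttpd/access.log | grep -v "favicon.ico" | egrep --color=auto '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}'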

Quick. Easy. Adaptable.

Cook your RAW photos into JPEG with Linux

NOTE: This is a ported post from the old Mutaku site. Original post information:
Thursday, August 14 2008 @ 04:50 PM EDT Contributed by: xiao_haozi

Shooting in RAW format is great for the photography buff. However, when it is time to share with others, post on your photo gallery, or print at the local photo printing shop, you want something more portable. Here we’ll look at converting RAW photos (specifically Canon’s CRW format) into JPEGs using a batch command-line approach in BASH.

We will be using two tools to do our batch conversion:

1- DCRAW (available here)

2- CJPEG (available here)

CJPEG is part of the standard JPEG library (libjpeg), so you should already have it installed. If not, you can find the source here and build it as follows:

./configure --enable-shared
 make 
 make install (as root, e.g. sudo make install)

DCRAW is available here and can be built as such:

wget http://www.cybercom.net/~dcoffin/dcraw/archive/dcraw-8.87.tar.gz
tar xzvf dcraw-8.87.tar.gz
cd dcraw-8.87
./configure
make
make install (as root, e.g. sudo make install)

Now that we have the prerequisites out of the way, we can try it out. We’ll manually do the conversion to ensure it works. Make sure you then compare the images and ensure they meet your needs/desires with the settings we use below.

In our example we will assume we have a directory called trip08 on our user’s desktop that contains some CRW raw images as well as others. We will go into this directory, make a CRW directory to copy our RAW images to for converting, search here and all subdirectories for CRW files and copy to our new directory, and then convert them using the two tools above so we will have in our new directory a copy of the original RAW CRW image as well as a new JPEG version.

Here we go:

cd ~/Desktop/trip08 && mkdir CRW_conversions
find . -name \*.[Cc][Rr][Ww] -exec cp "{}" CRW_conversions/ ";"

First, we went into our trip photo directory and made a directory called 'CRW_conversions' that we will work within. Next, we looked in the current trip directory (and subdirectories) for all files that end in '.crw', including any variation in capitalization. The files found are copied to our CRW_conversions directory.

cd CRW_conversions
for i in *; do dcraw -c "$i" | cjpeg -quality 100 -optimize -progressive > $(echo $(basename "$i" ".crw").jpg); done

Now we went into our CRW_conversions directory that contains copies of our RAW images. We then constructed a loop which:

1- for i in *; do for each element in this directory do the following
2- dcraw -c “$i” | use dcraw to decode the element and pipe it out
3- cjpeg -quality 100 -optimize -progressive > accept the pipe from dcraw with cjpeg and convert to 100% quality jpeg with optimization and progressive processing and output that
4- $(echo $(basename “$i” “.crw”).jpg) take that output and put in a file called element.jpg where element is the name of the original CRW version without the extension ‘.crw’
5- ; done finish the loop once the iterations are done

We should now have a directory called CRW_conversions in our trip08 directory that contains copies of all our RAW images as well as fresh versions of each in JPEG format. The settings we used to convert have given us fairly high quality images that are still relatively large, so you may wish to alter these to your liking. You can do so by altering the CJPEG part of the conversion and find the options here:

man cjpeg
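
For example, a hedged variation of the conversion loop that trades a little quality for much smaller files (tune -quality to taste):

for i in *; do dcraw -c "$i" | cjpeg -quality 85 -optimize -progressive > $(echo $(basename "$i" ".crw").jpg); done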

If you find that you have errors of ‘Empty input file’ and resultant JPEG files that are 0 bytes in size, you most likely have left out the ‘-c’ switch in your DCRAW part. This is needed to send the output with the pipe, or else you will end up with PPM files.

You can place most of this code into a shell script to do the brunt of the work for you without having to type it in each time. We are currently working on a Python version which uses Tkinter to provide a GUI as well as options to convert and manipulate several image formats, and we will post source code and a how-to once it is finished. However, if you want to try out a shell version now, you might try something like:

#!/usr/bin/env bash
#######################################
# crw2jpg
#      convert canon raw into jpeg
#######################################
mkdir CRW_conversions
find . -name \*.[Cc][Rr][Ww] -exec cp "{}" CRW_conversions/ ";"
cd CRW_conversions
for i in *; do dcraw -c "$i" | cjpeg -quality 100 -optimize -progressive > $(echo $(basename "$i" ".crw").jpg); done
echo "CONVERSIONS COMPLETE"

This is a very basic script without any real error control or logic, so add to it as you wish/see fit. If you keep it as is, just make sure that you haven't already run a conversion in the directory you are working in. To run this:

#(copy the above script)
nano ~/bin/crw2jpg
#(paste and save)
chmod +x ~/bin/crw2jpg
cd to_directory_with_new_photos/
crw2jpg

Enjoy cooking those RAW images, and be sure to check back for our conversion GUI. Good luck!

Backing up a personal DVD

NOTE: This is a ported post from the old Mutaku site. Original post information:
Sunday, January 02 2011 @ 12:18 AM EST Contributed by: xiao_haozi

Sometimes you find the need to backup a personal DVD. There are many GUI tools available that are up to the task, but sometimes some good, old command-line magic is just what you need to get the job done… especially with some more troublesome tasks.

I recently received a personal video that had been burned to a dual layer DVD and region coded. I wanted to make a copy for myself but had some trouble getting the DVD to open in tools like K3B and K9Copy. So I went to some old command-line magic to get things done the simple and old school way.

I will simply outline each step quickly to give you an idea of how things are done and let you tweak things to meet your situation. Hopefully, things will be pretty evident where changes will need to be made to meet your personal setup such as device identifiers, file names, and audio/video quality preferences.

Also note, this is for personal, non-encrypted DVDs. This won’t work on commercial DVDs that use CSS based encryption and isn’t a tutorial intended for those purposes. This tutorial is aimed to backup your personal videos.

To briefly summarize, we will first do a direct copy of the DVD drive contents and make a temporary image file with our project name "this_project". We will then encode the contents using DVD quality parameters with mencoder (these can be adjusted to suit your own space/quality limitations and desires). After creating the new DVD project with dvdauthor, giving us the appropriate VIDEO_TS and AUDIO_TS directory structures, we write this to a DVD with growisofs.

The actual rip part:

dd if=/dev/sr0 of=this_project.iso

The mencoder part (this is all a one-liner) to create the DVD quality mpeg:

mencoder -oac lavc -ovc lavc -of mpeg -mpegopts format=dvd:tsaf\
 -vf scale=720:480,harddup -srate 48000 -af lavcresample=48000  \
 -lavcopts vcodec=mpeg2video:vrc_buf_size=1835:vrc_maxrate=9800:\
vbitrate=5000:keyint=18:vstrict=0:acodec=ac3:abitrate=192:autoaspect -ofps 30000/1001\
 -o this_project_dvdqual.mpg this_project.iso

Authoring the DVD structure: create this xml file, called this_project.xml (here we won't make any chapters):

<dvdauthor>
  <vmgm />
  <titleset>
    <titles>
      <pgc>
        <vob file="this_project_dvdqual.mpg" />
      </pgc>
    </titles>
  </titleset>
</dvdauthor>

Now we will author the DVD with the structure we called for in the xml file and then burn the DVD iso image:

dvdauthor -o this_project -x this_project.xml
growisofs -dvd-compat -Z /dev/sr0 -dvd-video ./this_project/
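
If you want to sanity check the result before the growisofs step, you can preview the freshly authored structure with mplayer by pointing it at the project directory (assuming mplayer is installed and the directory name matches the one passed to dvdauthor above):

mplayer dvd://1 -dvd-device ./this_project/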

As you can see, it’s a pretty simple process. The most intimidating step is creating the temporary mpeg file with mencoder, but you can easily follow the writeup here if you wish to change any of the parameters. However, the ones I have chosen and outlined above should give you a pretty nice standard NTSC DVD format video. It’s really not that bad.

Setting up a linux dialup connection

NOTE: This is a ported post from the old Mutaku site. Original post information:
Tuesday, June 19 2007 @ 06:20 PM EDT Contributed by: xiao_haozi

My dirty walk-through for setting up an external modem in Linux using wvdial to dial into an ISP connection. I had some trouble finding a single place with a good write-up for this task when I was setting up a Linux box for someone who had previously been using Windows and a little OS X. Finding dialing info, modem configuration, or slight troubleshooting alone wasn't too difficult, but finding them all in one place in a concise manner was. So I have attempted to compile what I found, my tweaks and adjustments, and other hints that enabled me to set up a Linux box to use a dial-up connection via an external modem.

As I mentioned, information for this task was extremely piecemeal, and finding any form of updated support for a modern Ubuntu distribution (Feisty Fawn – Kubuntu in particular) was like searching for a pay phone in Hollywood, or so I imagine. Similes aside, most of the information I could locate was for workarounds with internal winmodems (or lack thereof), or even the, “…yeah, why are you still trying to use dial up?” Regardless of the archaism of the technology, America is apparently still clinging to this method of exploration as evident by googling some statistics on broadband penetrance.

So there I was left with a monumental task for my new school networking knowledge (setting up firewalls, DHCP, NAT, whipping up some cat5, etc) and burgeoning Linux skills. In the end, I could hear that clunky, mystical modem speaker cranking away, and yes, the sweet, sweet splendor of waiting eons for those jpegs to load in my Google image search. Away we go –

The hardware and tools I had available were as follows (with a little explanation why where relevant):

* Linux box (AMD64) : trying to switch them to a free framework (and that which I find most fun)

* Kubuntu Feisty Fawn 7.04 (64bit version) : running KDE of course (I find KDE to be a little more user friendly for those coming from windows — bring on the flame war — but that is just mho)

* A few sporadic guides (I truly apologize for my lack of references, but I will try to add a list of sorts at the end)

* external modem : Trendnet TFM-560x v.92 56k serial modem (because internals are a pain to find — at least in my house — and most of mine were winmodems from scrapped dell boxes which use proprietary drivers from conexant — and because external modems are sexy)

* PCI serial card : Syba PCI multi I/O PCI 2 serial nm9835cv netmos dual ports (because the Asus motherboard I was building with didn’t have a serial port onboard — only Parallel — and I didn’t want to fight with a serial2usb approach)

* some command lines tools you will see during the guide

[WARNING: the guide to follow is what worked for me and is probably pretty crude and round-about. If you have a more coherent and prettier way to do this, please leave comments for the others and myself. I am merely a computer hobbyist and by no stretch of the imagination a professional in any computer-related field (biochemistry, to be specific), so this may be a little crude and ugly to some. Like I said, my Linux skills are not Jedi level, so bear with me please!]

First we need to find out where the PCI card is located and what the addresses of the serial ports are. We can do this many ways, but I like to first check that the card is recognized:

$ lspci -vv
 

We are looking for a ‘netmos’ PCI card and should see some info about it there with 5 regions listed all under the serial controller. You can also try scanpci if you wish.

Now we need the hex port address for the two serial ports on our card. I like to use dmesg and grep for the IRQ that lspci reported for the card (17 on this machine):

$ dmesg | grep "irq = 17"
 

You should see ttyS2 and ttyS3 entries — for example:

0000:04:09.0: ttyS2 at I/O 0x9c00 (irq = 17) is a 16550A
 

'ttyS2' tells us where, '0x9c00' tells us the hex port ID, and '16550A' tells us the type.

 

So we can rerun this command by pressing the up arrow and appending an output redirect to save the results to a file for reference:

$ dmesg | grep "irq = 17" > port_locations
 

We next need to install the tool to set the serial ports if we don’t already have it:

$ sudo apt-get install setserial
 
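With setserial installed, you can also double check what the kernel currently has configured for each port before changing anything (a hedged extra step, not strictly required):

$ setserial -g /dev/ttyS[0-3]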

Since we now know where our PCI card and serial ports are located, we can configure them as follows, referencing the port_locations file we saved the grep output to. (Note: I have configured both ports for the modem for ease, in case it gets switched when I give the box to the users. If you are going to use the second port for something else, keep that in mind; I haven't tested it, but I would assume that setting both serial ports now shouldn't affect adding in, say, a scanner later, as long as the speeds are compatible and the /dev/modem link points at the right one.)

$ setserial /dev/ttyS2 port 0x9c00 UART 16550A irq 17 Baud_base 115200

$ setserial /dev/ttyS3 port 0x9800 UART 16550A irq 17 Baud_base 115200
 

Now we will check that each is set appropriately:

$ setserial /dev/ttyS2 -a
/dev/ttyS2, Line2, UART: 16550A, Port: 0x9c00, IRQ:17
Baud_base: 115200, close_delay: 50, divisor: 0
closing_wait: 3000
Flags: spd_normal skip_test

$ setserial /dev/ttyS3 -a
/dev/ttyS3, Line2, UART: 16550A, Port: 0x9800, IRQ:17
Baud_base: 115200, close_delay: 50, divisor: 0
closing_wait: 3000
Flags: spd_normal skip_test
 

Since we have plugged our modem into the left-most serial port (the one closest to the motherboard), we are using ttyS2, so we will link this to /dev/modem for our dialer program to use (of course, this only applies if you plugged your modem into the serial port closest to the mobo):

$ sudo ln -sf /dev/ttyS2 /dev/modem
 

and check it:

$ ls -l /dev/modem

lrwxrwxrwx 1 root root 5 2006-05-11 15:40 /dev/modem -> /dev/ttyS2
 

On reboot, the link we just created will be reset to its original location, i.e. something other than where our modem truly is located. So we need to have it re-linked at startup, and we can create an init.d script to do just that:

$ cd /etc/init.d/

$ sudo nano modem-link

#!/bin/bash
# modem-link : fix the /dev/modem link on startup
ln -sf /dev/ttyS2 /dev/modem
 

Set the execute bit and allow it to run at startup:

***** IMPORTANT: don’t forget that period after 99 2 3 4 5 when doing the update-rc.d below *******

$ sudo chmod +x modem-link

$ sudo update-rc.d -f modem-link start 99 2 3 4 5 .
 

Now one's inclination would be to fire up KPPP or GPPP and use the GUI to configure the dial-up options. However (at least in my build, and apparently most; see ubuntuforums.org), KPPP does not work. I managed to get it so far as to see the modem and attempt to dial, but the dial sequence was weird and it would not dial properly. I really wanted a GUI for this part because the users I was building this for would be dialing up and are not yet comfortable with the command line. And I wanted things to be simple; I didn't want connecting to the internet to scare them from Linux back to Windows or OS X. However, I knew that the easiest way (i.e. the most trouble-free method) would be to use a command-line dialer and then script it somehow, so that for now their interaction with the terminal would be a little less daunting. Let's move on to my favorite command-line dialer, wvdial.

Let’s get wvdial if we don’t already have it:

$ sudo apt-get install wvdial
 

Now we need to give wvdial our ISP information in the config file:

$ sudo nano /etc/wvdial.conf

[Dialer Defaults]
Phone = (your ISP number)
Username = (your ISP username)
Password = (your ISP password)
New PPPD = yes
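
If wvdial has trouble finding the modem, the bundled wvdialconf tool can probe your serial ports and fill in the Modem and Init lines of the config file for you (a hedged extra step; with the /dev/modem link above in place it should not be needed):

$ sudo wvdialconf /etc/wvdial.conf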
 

Now let's use a KDE-style launcher to handle launching wvdial:

Open the Kmenu for editing and choose to add a new program.

We will name it (isp_name_here)-dialup, choose an icon of your liking, and then under the execution portion enter:

sudo wvdial
 

also choose to run in a terminal.

Now save it and exit.

To dial up now, all we have to do is find our little launcher we just made in the Kmenu, run it, and it will prompt us for our user password. Type in the password for the current user and you will see wvdial running in a terminal window. Once you see that you have acquired your IP address it will pause there and you should be ready to roll. Minimize your terminal window that wvdial is in and off to the races you go (well maybe not races as we are using dial up — maybe off to bingo?! we go).

Now to exit from our internet excursions and get our phone line back — maximize the terminal window and type ctrl-c (control + c) and this will exit wvdial and thus the connection.

So there you have it: a possibly very round-about way to get your dial up going. I found it to be a very elegant method. However, I am sure that reading through this, many have cringed, spit coffee at their monitors, or deleted my page from their bookmarks. But it worked for me and taught me some fun things about port assignment, writing startup scripts, and the like. I would appreciate your feedback: what worked, what didn’t work, did you do it another way, etc.

Good luck — modems ahoy~!