28 Dec 2011 @ 8:00 PM 

Hello,

A very short one today, but it covers something that is not very well known.

As you know, Unix uses signals to communicate with processes. A process receives a signal, then takes an action. If no action is defined, the default action for most signals is to kill the process… which is maybe something you want to avoid.

The problem.

You have written the best script in the world, that does… let’s say… make ice-cream (we can dream).

The problem is that if someone kills your script (i.e. kill -15, which sends SIGTERM, the default signal), your process will simply stop, and throw ice-cream everywhere on the walls. (Ice-cream is here an image for temporary files.)

How can you catch this message and close the ice-cream tap and clean-up a bit ?

The Solution.

Use trap.

At the beginning of your script, you can add the following line (after the shebang) :

trap cleanup 15

This will catch SIGTERM, then start the procedure "cleanup" (you need to define the "cleanup" function somewhere in the script). Note the order of the arguments : the action comes first, then the signal(s) to catch.

You can define a function that will do the cleanup for you, maybe add a message in the log, kill sub-processes, send mail, whatever is needed.

The only caveat is that you need to fail-proof your function, to be sure that your process does not go into an infinite loop, because the only way to kill it afterwards is to use another kill.

Note also that not all signals can be trapped. Some of them, like signal 9 (SIGKILL), are not trappable in a script.
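Putting it together, a minimal sketch (the temporary file and the messages are illustrative, not from an actual ice-cream script) :

```shell
#!/bin/bash
# Minimal sketch of a SIGTERM trap. TMPFILE stands in for whatever
# temporary state ("ice-cream") the script creates.
TMPFILE=$(mktemp)

cleanup () {
    echo "Caught SIGTERM, cleaning up" >&2
    rm -f "$TMPFILE"       # close the tap : remove temporary files
    exit 1
}

# Action first, then the signal(s) to catch.
trap cleanup 15            # 15 = SIGTERM ; "trap cleanup TERM" also works
```

Run the script, find its PID, and kill -15 it : the cleanup function runs instead of the process dying with ice-cream on the walls.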

Thank you for reading, and if I find some time, I’ll talk tomorrow about an inventive way to use the trap function.

Posted By: Dimi
Last Edit: 27 Dec 2011 @ 08:44 PM

Categories: Bash, Signals, Snippets
 27 Dec 2011 @ 8:03 PM 

So today, we’ll see how to pass a variable (and not the content of a variable) to a function.

The Problem

You need to create a function that updates one or more variables given as parameters. For example, let's say you want to create a "to_upper" function that takes a variable as parameter, then modifies it.

If you do it like this :

to_upper $mytext

You will just receive the content of the variable. Then, what do you want to do with it ? Print it ? You cannot return it to the variable, as you do not know its name.

The solution :

Write your code like this :

function to_upper  {
    eval _text=\$$1
    export $1=`echo "${_text}" | tr '[a-z]' '[A-Z]'`
}

and you can execute it like this :

toto="hello"

to_upper toto

echo $toto

HELLO

From there, you can execute anything, like SQL queries, and return the result in the calling variable or in another one… Of course, you still need to protect your function, checking that the given parameter is the name of a variable, and not the variable's content.
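A minimal sketch of such a guard (the bash regex test is my own addition, not from the post) : refuse anything that does not look like a plain variable name before the eval touches it.

```shell
to_upper () {
    # Guard (illustrative) : only accept a plain identifier, so that a
    # content passed by mistake (spaces, punctuation) fails loudly
    # instead of reaching the eval. A content that itself happens to
    # look like a variable name will still slip through.
    if ! [[ "$1" =~ ^[a-zA-Z_][a-zA-Z0-9_]*$ ]]; then
        echo "to_upper : '$1' is not a variable name" >&2
        return 1
    fi
    eval _text=\$$1
    export "$1=$(echo "${_text}" | tr '[a-z]' '[A-Z]')"
}
```

With this, `to_upper toto` still works as before, while `to_upper "some content !"` returns an error instead of eval-ing garbage.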

Thank you for reading !

Posted By: Dimi
Last Edit: 05 Jan 2012 @ 07:52 PM

Categories: Bash, Snippets
 23 Dec 2011 @ 8:00 PM 

A short one for today, then it's the week-end and I will only post on Tuesday (Monday is a holiday).

In a script, one of the things that is usually overlooked is the validation of variables. And if you are developing a library of functions (for scripts), it is simply forgotten most of the time. This is one of the reasons you are not re-using the functions made by your fellow developers : if you want them to work for your specific purpose, you have to specify a thousand options that you will never use.

How can you make a function use optional variables (meaning that it resorts to defaults when nothing is specified) without writing dozens of lines of code ?

By using the little trick described in this article. This is how you usually define a default variable :

D_LOG_LEVEL=9
if [ "$1" = "" ]    # Nothing given on the command line : use the default
then
    LOG_LEVEL=${D_LOG_LEVEL}
else
    LOG_LEVEL=$1
fi

Half your script goes into writing this wonderful code. Now, let's make it faster.

First, we can replace the if with the && / || construct we've seen earlier this week :

D_LOG_LEVEL=9
[ "$1" = "" ] && LOG_LEVEL=${D_LOG_LEVEL} || LOG_LEVEL=$1

(Note that there are no parentheses around the first part : they would create a subshell, and the assignment would be lost when the subshell exits.)

Next step, we can get rid of the variable "D_LOG_LEVEL", as it is used only once.

[ "$1" = "" ] && LOG_LEVEL=9 || LOG_LEVEL=$1

Last step, let's get rid of the not very nice "" = "" test :

[ -z "$1" ] && LOG_LEVEL=9 || LOG_LEVEL=$1

This allows you to define variables that resort to a default when nothing is specified, without losing too much time writing the code.

Note that the same trick works for a variable that may already have been set before your script is called :

[ -z "${LOG_LEVEL}" ] && LOG_LEVEL=9

Last thing about the variables in a function : you should test them before you run your function or script. This sounds obvious, but it is usually not done (resulting in headaches for the people doing the support). If your script needs a filename as parameter and needs this file to exist, writing this line :

([ -z "$1" ] || [ ! -f "$1" ]) && { echo "File $1 Missing" ; return 1 ; }

is not going to take you much time and will help the next guy who is going to use your script.
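Putting both ideas together, a minimal sketch (the function name and the messages are illustrative, not from the post) :

```shell
process_file () {
    # Mandatory first argument : an existing file.
    # Braces, not parentheses, around the action : a "return" inside a
    # subshell would not abort the function.
    ([ -z "$1" ] || [ ! -f "$1" ]) && { echo "File $1 Missing" >&2 ; return 1 ; }
    # Optional second argument : a log level, defaulting to 9.
    # No parentheses around the && part : a subshell would lose the assignment.
    [ -z "$2" ] && LOG_LEVEL=9 || LOG_LEVEL=$2
    echo "Processing $1 with LOG_LEVEL=${LOG_LEVEL}"
}
```

Calling it with a missing file returns 1 immediately ; calling it with only a filename falls back to the default level.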

Thank you for helping the Unix Scripting World to be a better place, and see you on Tuesday !

Posted By: Dimi
Last Edit: 23 Dec 2011 @ 04:02 PM

 22 Dec 2011 @ 8:00 PM 

As you know, Unix has different ways of chaining commands.

The most well-known way to chain commands in Unix is the pipe (|).

Using a pipe, the output (stdout) of the first application is sent as input (stdin) of the second command, and it is done like this :

echo "hello world !" | sed -e 's/h/H/'

(which is, by the way, a difficult way to do something simple)

If this is something new for you, you should write it down immediately : it is bound to come up in any interview you do if you have "unix", "linux" or something equivalent in your resume…

File Descriptors Reminder (Skip it if you understand 2>&1)

File descriptors are the way Unix communicates with you. The main (default) ones are 1 for stdout and 2 for stderr.

If you do the following :

find / -type f

You will get a lot of error messages (if you are not root. If you are root, you should change to your own user immediately, and respectfully tell your System Administrator that you have done something bad. [And, by the way, you should always talk respectfully to your System Administrator])

The normal messages are sent to file descriptor 1, the errors to file descriptor 2, which means that you see both on screen, because both are redirected to your screen by default. If you do this :

find / -type f >/tmp/find.out 2>/tmp/find.err

then the errors will be sent to find.err, and the stdout will be sent to find.out.

Now, how would you send both outputs to the same file ? One way is this :

find / -type f >>/tmp/find.all 2>>/tmp/find.all

But it’s long, and we are lazy. The other way to do is this :

find / -type f >/tmp/find.all 2>&1

which means "Send stdout to find.all and stderr to stdout". &1 represents stdout. The following would work as well :

find / -type f 2>/tmp/find.all >&2
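One more thing worth remembering : redirections are applied from left to right, so their order matters. A small sketch (the helper and the file names are illustrative) :

```shell
# A helper that writes one line to each stream :
both () { echo "normal" ; echo "error" >&2 ; }

both > all.log 2>&1     # stdout goes to all.log first, then stderr
                        # follows it : both lines land in all.log

both 2>&1 > out.log     # stderr is duplicated to the *current* stdout
                        # (the screen), then stdout moves to out.log :
                        # "error" still appears on screen

rm -f all.log out.log
```

This is why `>file 2>&1` works and `2>&1 >file` does not do what most people expect.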

OK, let’s go back to pipe now.

Pipes and file descriptors

The pipe only passes stdout (1>) which means that whatever you have in the stderr (2>) is not redirected.

Let's imagine you do a find, then pipe it to grep "toto", case insensitive because your find is not able to do it :

find . -type f | grep -i toto

If you do not have the rights on certain files or directories, error messages will pour in, and you may think you can get rid of them by doing :

find . -type f | grep -i toto 2>/dev/null

Well, no luck. Error messages are still there, as the stderr is not passed through the pipe… The command that gets rid of the error messages generated by find is :

find . -type f 2>/dev/null | grep -i toto

Pipes and return code

The return code of a pipeline is the return code of the last command on the line. The exit statuses of the earlier commands are simply discarded as far as the rest of your script is concerned (in bash, you can still inspect them through the PIPESTATUS array).
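A quick illustration in bash (PIPESTATUS is bash-specific) :

```shell
false | true
echo $?                   # 0 : only the last command of the pipe counts

false | true
echo "${PIPESTATUS[@]}"   # 1 0 : bash keeps the individual statuses here
```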

This was another one of the basics, I hope you enjoyed it !

 

Posted By: Dimi
Last Edit: 21 Dec 2011 @ 09:33 AM

 21 Dec 2011 @ 8:00 PM 

Tomorrow, we will talk about pipes (and file descriptors). But today, let’s talk about the other ways to chain commands.

One of the easy ways for you to have two commands running one after the other, without staying in front of the screen (see you at Starbucks), is to chain them using a semicolon :

RunVeryLongProcess;PrintReportOnVeryLongProcess;SendMailToBoss

Once you have defined the commands "RunVeryLongProcess", "PrintReportOnVeryLongProcess", and "SendMailToBoss", you will get a pay rise because the three jobs have run one after the other, for 9 hours, and you have sent a mail to your boss at 11pm ! How dedicated you are ! (At least to a Venti Macchiato :-)).

Yep, but what if RunVeryLongProcess fails ? Oops… You have sent an empty report to your customer, and a mail to your boss saying that everything's OK…

To avoid this, you can use the logic operators “and” (“&&”) and “or” (“||”), e.g.

RunVeryLongProcess && PrintReportOnVeryLongProcess && SendMailToBoss

Now, if "RunVeryLongProcess" or "PrintReportOnVeryLongProcess" fails, no mail will be sent to your boss. You might not be the employee of the week, but you will still have a job 🙂

This "and" operator means that if the first job succeeds, the next one is kicked off. The keyword here is if, as you can do the same with an if statement, like this :

RunVeryLongProcess
if [ $? -eq 0 ]
then
    PrintReportOnVeryLongProcess
    if [ $? -eq 0 ]
    then
        SendMailToBoss
    fi
fi

I'm tired just from writing this, and it is only for three processes ! So, you can now make extensive use of the && operator.

You can also use it to replace an if statement

&& operator instead of if statement.

Let’s say you want to check the variable “a”, if it is set to “morning” you say hello, and if not, you say “goodbye”…

(you can write the if, i’ll skip it, i’m tired)

And using && operator :

([ "$a" = "morning" ] && echo "Hello" ) || echo "goodbye"

You must understand how this works. It starts by executing the first part, in this case, a test :

[ "$a" = "morning" ]

Then, if the result is true (return code 0), it executes the second part of the equation, i.e. :

echo "Hello"

If the first part of the equation is false, there is no need to execute the second part : the block around the && operator will be false. It then goes to the third part, which is executed when the && equation is false, i.e. :

echo "goodbye"

If you want to use this, you must be sure that whatever function or command you call (except the first part, of course) is going to return 0. Otherwise, you can get strange results, as both the "then" and "else" commands will be executed.

There is a way to avoid it, but it is getting complicated :

([ "$a" = "morning" ] && (say "hello" || true)) || say "goodbye"

By doing this, even if the "say" function returns a non-zero status code, the "true" is executed. true always returns 0, so this is a way to avoid bizarre results.
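A quick demonstration of the pitfall and of the fix, using false to stand in for a failing say :

```shell
a="morning"

# Pitfall : the "then" branch fails, so the "else" branch runs too.
([ "$a" = "morning" ] && false) || echo "goodbye"             # prints "goodbye" !

# Fix : force the "then" branch to return 0.
([ "$a" = "morning" ] && (false || true)) || echo "goodbye"   # prints nothing
```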

This is as far as you should go. Beyond this point, there is no benefit to using the && operator anymore. Just write the if statement, and the guy fixing your script later will be thankful (and it might be you).

Regarding the order of execution, the parentheses do not change it : the operations are executed from left to right.

(btw, the “say” command exists under Mac OS X and is quite fun).

That’s all for today. I hope you have enjoyed it, and see you tomorrow for a refresh on the pipes !

 

Nota bene

The reason why the parentheses work is that everything inside them runs in a subprocess (a subshell), and the result is thus the one of the whole subprocess (the whole command, with both commands inside).

Therefore, you should not use this to set a variable, as the subshell is not going to affect the context of the main process !

Posted By: Dimi
Last Edit: 05 Jan 2012 @ 07:51 PM

Categories: Bash, Snippets
 20 Dec 2011 @ 8:00 PM 

Ok, a quick one today.

If you have never used sed, it is time to do so.

sed stands for Stream EDitor. It is a way to make quick changes in a file or a stream (between pipes). It is extremely powerful, and very fast.

Its main function is to search for strings, then apply changes to it. A small example :

sed -e 's/hell/heaven/g' file >file2

This parses the file and replaces all the occurrences of "hell" with "heaven". Try to do that without sed (even awk will make it quite difficult).

In the above example, the '-e' is not required. You can omit it, but I prefer to put it all the time, because it becomes mandatory when you chain sed expressions, i.e. :

sed -e 's/hell/heaven/g' -e '1d' file >file2

This would replace the “hell” with “heaven” in the file, then delete the first line of the file.

Some versions of sed allow in-place changes, meaning that you do not have to write the output to a second file. However, this is not available in all versions and systems, so I would discourage its use if you are moving from one server to another.
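For reference, a small sketch of the in-place flag (the syntax differs between implementations, which is exactly the portability problem) :

```shell
printf 'welcome to hell\n' > file.txt

# GNU sed (most Linux systems) :
sed -i -e 's/hell/heaven/g' file.txt

# BSD sed (macOS) requires an explicit, possibly empty, backup suffix :
# sed -i '' -e 's/hell/heaven/g' file.txt

cat file.txt               # welcome to heaven
rm -f file.txt
```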

The real power of sed is REGEX, or REGular EXpressions.

Small example : Imagine you want to comment out all the scripts called “Boom.sh” that are called inside the other script “Badaboom.sh”, a script 123000 lines long… How do you do that ? vi ? vim ? notepad ?

sed -e '/Boom.sh/s/^/# /' Badaboom.sh >Badaboom2.sh

Done !

The first part of the command '/Boom.sh/' searches for all the lines that contain the string between the slashes. (To be precise, the dot matches any character in a regex ; '/Boom\.sh/' would match only the literal file name.)

The second part of the command 's/^/# /' replaces, on each matching line, the beginning of the line (^) with a hash-space string (# ).

REGEXs can become very powerful, and if you are working with Unix, it is time for you to learn the basics of this super tool, sed.

 http://www.grymoire.com/Unix/Sed.html

Thank you for reading,

Dimitri

Posted By: Dimi
Last Edit: 21 Dec 2011 @ 09:34 AM

