Monday, May 17, 2010

Force removing flashplugin-nonfree completely on ubuntu lucid

Got this error after upgrading from Ubuntu Hardy to Lucid:

flashplugin-nonfree: subprocess installed pre-removal script returned error exit status

Found this fix on a Debian forum:


rm /var/lib/dpkg/info/flashplugin-nonfree.prerm
### the most important step: manually remove the broken pre-removal script ###
dpkg --remove --force-remove-reinstreq flashplugin-nonfree
### after the step above, the removal now succeeds ###
dpkg --purge --force-remove-reinstreq flashplugin-nonfree
### purge everything related to the package ###

That removed flashplugin-nonfree completely, and a fresh installation is now possible.
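With the broken maintainer script gone, reinstalling should work again; a sketch, assuming the package is still available under the same name on Lucid:

apt-get install flashplugin-nonfree
### run as root (or with sudo) to reinstall the plugin cleanly ###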

Friday, April 16, 2010

PVM on snow leopard

Still got this error after trying several times to execute the command from anywhere in the terminal:

sudo port install pvm
Password:
---> Computing dependencies for pvm
---> Building pvm
Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_science_pvm/work/pvm3" && /usr/bin/make all " returned error 2
Command output: making in . for DARWIN
building in src
cd src; ../lib/aimk CC="/usr/bin/gcc-4.2" F77="f77" install
making in DARWIN/ for DARWIN
/usr/bin/gcc-4.2 -O -DCLUMP_ALLOC -DSTATISTICS -DTIMESTAMPLOG -DSANITY -I../../include -DARCHCLASS=\"DARWIN\" -DIMA_DARWIN -DSOCKADHASLEN -DNOREXEC -DRSHCOMMAND=\"/usr/bin/rsh\" -DHASSTDLIB -DNEEDMENDIAN -DHASERRORVARS -DFAKEXDRFLOAT -DSYSERRISCONST -I/usr/include/malloc -I/System/Library/Frameworks/System.framework/Headers/bsd/sys -c ../../src/pmsg.c
../../src/pmsg.c:785: error: static declaration of 'xdr_float' follows non-static declaration
/usr/include/rpc/xdr.h:376: error: previous declaration of 'xdr_float' was here
../../src/pmsg.c: In function 'xdr_float':
../../src/pmsg.c:788: warning: passing argument 2 of 'xdr_long' from incompatible pointer type
../../src/pmsg.c: At top level:
../../src/pmsg.c:794: error: static declaration of 'xdr_double' follows non-static declaration
/usr/include/rpc/xdr.h:377: error: previous declaration of 'xdr_double' was here
../../src/pmsg.c: In function 'xdr_double':
../../src/pmsg.c:797: warning: passing argument 2 of 'xdr_long' from incompatible pointer type
../../src/pmsg.c:797: warning: passing argument 2 of 'xdr_long' from incompatible pointer type
../../src/pmsg.c: In function 'enc_xdr_long':
../../src/pmsg.c:1395: warning: passing argument 2 of 'xdr_long' from incompatible pointer type
../../src/pmsg.c:1417: warning: passing argument 2 of 'xdr_long' from incompatible pointer type
../../src/pmsg.c: In function 'enc_xdr_ulong':
../../src/pmsg.c:1442: warning: passing argument 2 of 'xdr_u_long' from incompatible pointer type
../../src/pmsg.c:1461: warning: passing argument 2 of 'xdr_u_long' from incompatible pointer type
../../src/pmsg.c: In function 'dec_xdr_long':
../../src/pmsg.c:1763: warning: passing argument 2 of 'xdr_long' from incompatible pointer type
../../src/pmsg.c:1773: warning: passing argument 2 of 'xdr_long' from incompatible pointer type
../../src/pmsg.c: In function 'dec_xdr_ulong':
../../src/pmsg.c:1848: warning: passing argument 2 of 'xdr_u_long' from incompatible pointer type
../../src/pmsg.c:1858: warning: passing argument 2 of 'xdr_u_long' from incompatible pointer type
make[2]: *** [pmsg.o] Error 1
make[1]: *** [s] Error 2
make: *** [all] Error 2


--> solution:
sudo rm -rf /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_science_pvm

and then:
cd /opt/local/var/macports/sources/rsync.macports.org/release/ports
sudo port install pvm
--> worked perfectly.
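Side note: MacPorts can also discard a stale build tree itself, which may be a cleaner alternative to deleting the build directory by hand:

sudo port clean --all pvm
sudo port install pvm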

Tuesday, October 27, 2009

Adding a directory to the Path (Bash)

Executive Summary

Adding a directory to the path of a user or all users would seem trivial, but in fact it isn't. The best place to add a directory to the path of a single user is to modify that user's .bash_profile file. To add it to all users except user root, add it to /etc/profile. To also add it to the path of user root, add it to root's .bash_profile file.

Disclaimer

Obviously, you use this document at your own risk. I am not responsible for any damage or injury caused by your use of this document, or caused by errors and/or omissions in this document. If that's not acceptable to you, you may not use this document. By using this document you are accepting this disclaimer.

Pre and Post Pathing

Linux determines the executable search path with the $PATH environment variable. To add directory /data/myscripts to the beginning of the $PATH environment variable, use the following:

PATH=/data/myscripts:$PATH
To add that directory to the end of the path, use the following command:
PATH=$PATH:/data/myscripts
But the preceding are not sufficient because when you set an environment variable inside a script, that change is effective only within the script. There are only two ways around this limitation:
  1. If, within the script, you export the environment variable it is effective within any programs called by the script. Note that it is not effective within the program that called the script.
  2. If the program that calls the script does so by inclusion instead of calling, any environment changes in the script are effective within the calling program. Such inclusion can be done with the dot command or the source command. Examples:
    . $HOME/myscript.sh
    source $HOME/myscript.sh
Inclusion basically incorporates the "called" script in the "calling" script. It's like a #include in C. So it's effective inside the "calling" script or program. But of course, it's not effective in any programs or scripts called by the calling program. To make it effective all the way down the call chain, you must follow the setting of the environment variable with an export command.
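A quick way to see the difference is with a throwaway script (the file name here is arbitrary):

echo 'PATH=$PATH:/data/myscripts' > setpath.sh
bash setpath.sh   # runs in a child process; the calling shell's $PATH is unchanged
. setpath.sh      # runs by inclusion; the calling shell's $PATH now ends with /data/myscripts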

As an example, the bash shell program incorporates the contents of file .bash_profile by inclusion. So putting the following 2 lines in .bash_profile:
PATH=$PATH:/data/myscripts
export PATH
effectively puts those 2 lines of code in the bash program. So within bash the $PATH variable includes /data/myscripts, and because of the export statement, any programs called by bash have the altered $PATH variable. And because any programs you run from a bash prompt are called by bash, the new path is in force for anything you run from the bash prompt.

The bottom line is that to add a new directory to the path, you must append or prepend the directory to the $PATH environment variable within a script included in the shell, and you must export the $PATH environment variable. The only remaining question is: in which script do you place those two lines of code?

Adding to a Single User's Path

To add a directory to the path of a single user, place the lines in that user's .bash_profile file. Typically, .bash_profile already contains changes to the $PATH variable and also contains an export statement, so you can simply add the desired directory to the end or beginning of the existing statement that changes the $PATH variable. However, if .bash_profile doesn't contain the path changing code, simply add the following two lines to the end of the .bash_profile file:

PATH=$PATH:/data/myscripts
export PATH
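
To apply the change to a shell that is already running, rather than waiting for the next login, you can include the file by hand and inspect the result:

. $HOME/.bash_profile
echo $PATH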

Adding to All Users' Paths (except root)

You globally set a path in /etc/profile. That setting is global for all users except user root. Typical /etc/profile files extensively modify the $PATH variable, and then export that variable. What that means is you can modify the path by appending or prepending the desired directory(s) in existing statements modifying the path. Or, you can add your own path modification statements anywhere before the existing export statement. In the very unlikely event that there are no path modification or export statements in /etc/profile, you can insert the following 2 lines of code at the bottom of /etc/profile:

PATH=$PATH:/data/myscripts
export PATH

Adding to the Path of User root

User root is a special case, at least on Mandrake systems. Unlike other users, root is not affected by the path settings in /etc/profile. The reason is simple enough. User root's path is set from scratch by its .bash_profile script. In order to add to the path of user root, modify its .bash_profile.

Summary

A fundamental administration task is adding directories to the execution paths of one or more users. The basic code to do so is:
PATH=$PATH:/data/myscripts
export PATH

Place that code, or whatever part of that code isn't already incorporated, in one of the following places:

User Class               Script to modify
One user                 $HOME/.bash_profile
All users except root    /etc/profile
root                     /root/.bash_profile


By Steve Litt

http://www.troubleshooters.com/linux/prepostpath.htm

Friday, January 09, 2009

Simple yet powerful find command

finder-keepers.

In its simplest use, the find command searches for files in the current directory and its subdirectories:
$ find .

./tp1301.txt
./up1301.txt
./tp1302.txt
./up1302.txt
./Up1303.txt
./misc/uploads
./misc/uploads/patch12_13.diff

As always, the dot indicates the current directory. Here find has listed all files found in the current directory and its subdirectories.

If we only want to find files with 'up' at the start of their name, we use the '-name' argument.
So the following would be used:

$ find . -name up\*

./up1301.txt
./up1302.txt
./misc/uploads

find defaults to being case sensitive. If we want the find utility to locate the file 'Up1303.txt' we could either do 'find -name Up\*' or use the -iname argument instead of the -name argument.

The wildcard character is escaped with a backslash so BASH sends a literal asterisk to the find utility as an argument, instead of performing filename expansion and passing any number of file names in as arguments.
This 'gotcha' is important. Be aware of the characters which the shell attaches special meaning to.

Now we know there are files that should have their names in lowercase, we can use find to get a list of files whose names aren't:

$ find -iname up\* -not -name up\*

Smooth Operator

find supports boolean algebra with the -and, -or and -not arguments. These are abbreviated as -a, -o and ! (which in bash must be escaped as \!) respectively. The -and operator is mentioned here for completeness; its presence between two tests is implied:

$ find . -iname david\*gray\*ogg -type f > david_gray.m3u

These operators are processed in the following order:

Parentheses
Use parentheses to force the order in which the operators are evaluated.

-not
Invert the result of the tested expression.

-and
E.g. ex1 -and ex2; the second expression isn't evaluated if the first evaluates to false

-or
E.g. ex1 -or ex2; here the second expression isn't evaluated if the first evaluates to true

','
This is the list operator; unlike -and and -or, both expressions are always evaluated. Read the '2 into 1 does go' section for more information.
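
To illustrate the precedence rules: the implied -and binds more tightly than -or, so parentheses change which files a test applies to. A quick sketch with hypothetical patterns:

$ find . -name \*.mp3 -o -name \*.ogg -size +1k
$ find . \( -name \*.mp3 -o -name \*.ogg \) -size +1k

The first command applies the size test only to the .ogg branch; the second applies it to files matching either pattern.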

The example in the Smooth Operator boxout creates an m3u playlist listing all ogg files that start with 'David Gray' (and all case-permutations):

$ find . -iname david\ gray\*ogg -type f > david_gray.m3u

This will find any files whose names are, in one way or another, "david gray....ogg".

This is semantically equivalent to:

$ find . -iname david\ gray\*ogg -and -type f > david_gray.m3u

It's also equivalent to quoting the pattern instead of escaping it:

$ find . -iname "david gray*ogg" -and -type f > david_gray.m3u

What if the ogg files themselves mightn't have the artist's name in them, but are in some subdirectory of one called 'David Gray'? How do we find them?

$ find . -ipath \*david\ gray\*ogg -type f > david_gray.m3u

The expression starts with a wildcard because it's possible there's more than one subdirectory named 'david gray'; some might be nothing more than symlinks used for categorisation.

Here's another example, we list the contents of the humour directory (one line per file) and do a case-insensitive search for .mp3 files with 'yoda' in the name of the file:

$ ls humour -1

Weird Al - Yoda.mp3
welcome_to_the_internet_helpdesk.mp3
werid al - livin' la vida yoda.mp3

$ find -ipath \*humour\*yoda\* -type f
./humour/Weird Al - Yoda.mp3
./humour/werid al - livin' la vida yoda.mp3

2 into 1 does go

As implied in the Smooth Operator boxout, it's possible to have one invocation of find perform more than one task.

To compile two lists, one containing the names of all .php files and the other the names of all .js files use:

$ find ~ -type f \( -name \*.php -fprint php_files , -name \*.js -fprint javascript_files \)

Pruning

Suppose you have a playlist file listing all David Gray .ogg files, but there are a few albums you don't want included.
You can prevent those albums from going into the playlist by using the -prune action, which works by matching the names of directories against the given expression.
This example excludes the Flesh and Lost Songs albums:

$ find \( -path ./mp3/David_Gray/Flesh\* -o -path "./mp3/David_Gray/Lost Songs*" \) -prune -o -ipath \*david\ gray\* -print

The first thing you'll notice here is that the parentheses are escaped so BASH doesn't misinterpret them. Using -prune takes the form "don't look for these, look for these other ones instead", i.e.:

$ find \( -path <excluded> -o -path <excluded> \) -prune -o -path <desired> -print

Composing a find command with the -prune action can take a bit longer: decide exactly what you want to exclude first. But I find using -prune saves me time I can use on other tasks.

Fussy Fozzy!

There's a host of other expressions and criteria that can be used with find.

Here is a brief rundown on the ones you'll most likely want to use:
-nouser    file is owned by someone no longer listed in /etc/passwd
-nogroup   the group the file belongs to is no longer listed in /etc/group
-user      file is owned by the specified user

We'll delve into using these, and others, later on.

Print me the way you want me, baby!

Changing the output information

If you want more than just the names of the files displayed, find's -printf action lets you have just about any type of information displayed. Looking at the man page there is a startling array of options.
These are used the most:
%p    filename, including the name(s) of the directory the file is in
%m    permissions of the file, displayed in octal
%f    filename, with no directory names included
%g    name of the group the file belongs to
%h    name of the directory the file is in; the filename isn't included
%u    username of the owner of the file

As an example:

$ find . -name \*.ogg -printf %f\\n
generates a list of the filenames of all .ogg files in and under the current directory.
The 'double backslash n' is important; '\n' indicates the start of a new line. The single backslash needs to be escaped by another one so the shell doesn't take it as one of its own.
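
The directives can be combined into one format string, and quoting the string saves you the double backslash. A small sketch that prints octal permissions, owner and path, one file per line:

$ find . -type f -printf '%m %u %p\n'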

Where to output information?


find has a set of actions that tell it to write the information to any file you wish. These are the -fprint, -fprint0 and -fprintf actions.

Thus

$ find . -iname david\ gray\*ogg -type f -fprint david_gray.m3u
is more efficient than
$ find . -iname david\ gray\*ogg -type f > david_gray.m3u

Execute!

Find is an excellent tool for generating reports on basic information regarding files, but what if you want more than just reports? You could just pipe the output to some other utility:

$ find ~/oggs/ -iname \*.mp3 | xargs rm

This isn't all that efficient though.
It is much better to use the -exec action:

$ find ~/oggs/ -iname \*.mp3 -exec rm {} \;

It mightn't read as well, but it does mean the files are deleted as soon as they are found.
'{}' is a placeholder for the name of the file that has been found, and as we want BASH to ignore the semicolon and pass it verbatim to find, we have to escape it.
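
A caveat on the earlier xargs pipe: it splits its input on whitespace, so file names like the David Gray tracks above would break it. If you do prefer xargs, GNU find can emit NUL-separated names, which is safe for any file name:

$ find ~/oggs/ -iname \*.mp3 -print0 | xargs -0 rm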

To be cautious, the -ok action can be used instead of -exec. The -ok action means you'll be asked for confirmation before the command is executed.
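
For example, to review each deletion one by one (find asks before running each rm):

$ find ~/oggs/ -iname \*.mp3 -ok rm {} \;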

There are many ways these can be used in 'real life' situations:
If you are locked out from the default Mozilla profile, this will unlock you:

$ find ~/.mozilla -name lock -exec rm {} \;

To compress .log files on an individual basis:

$ find . -name \*.log -exec bzip2 {} \;

Give user ken ownership of files that aren't owned by any current user:

$ find . -nouser -exec chown ken {} \;

View all .dat files that are in the current directory with vim. Don't search any subdirectories.

$ vim -R `find . -maxdepth 1 -name \*.dat`

Look for directories called CVS which are at least four levels below the current directory:

$ find -mindepth 4 -type d -name CVS 

Time waits for no-one

You might want to search for recently created files, or grep through the last 3 days worth of log files.

Find comes into its own here: it can limit the scope of the files found according to timestamps.

Now, suppose you want to see what hidden files in your home directory changed in the last 5 days:

$ find ~ -mtime -5 -name \.\*

If you know something has changed much more recently than that, say within the last 14 minutes, there's the -mmin argument:

$ find ~ -mmin -14 -name \.\*

Be aware that access time-stamps are easily disturbed: merely reading a file (with cat, grep and the like) updates its atime. If you inspect the files in a directory and then search with the access-time tests (-amin, -atime) for files accessed in the last 14 minutes, everything you just read will be listed.

To locate files that have been modified since some arbitrary date use this little trick:

$ touch -d "13 may 2001 17:54:19" date_marker

$ find . -newer date_marker

To find files whose status last changed before that date, negate the -cnewer condition:

$ find . \! -cnewer date_marker

To find files modified yesterday, meaning the previous calendar day rather than merely within the last 24 hours:

$ find . -daystart -mtime 1

The -daystart argument means the day starts at the actual beginning of the day, not 24 hours ago.
This argument has meaning for the -amin, -atime, -cmin, -ctime, -mmin and -mtime options.

Finding files of a specific size

A file of characters (bytes)

To locate files containing a certain number of characters, you can't go far wrong with:

# find files with exactly 1000 characters

$ find . -size 1000c
# find files containing between 600 and 700 characters, inclusive
$ find . -size +599c -and -size -701c
'Characters' is a misnomer: 'c' is find's shorthand for bytes, so the counts only correspond to characters for single-byte encodings such as ASCII, not multi-byte Unicode.

Consulting the man page we see
c = bytes
w = 2 byte words
k = kilobytes
b = 512-byte blocks

Thus we can use find to list files of a certain size:

$ find /usr/bin -size 48k

Empty files

You can find empty files with:
$ find . -size 0c
Using the -empty argument is more efficient.

To delete empty files in the current directory:

$ find . -maxdepth 1 -empty -exec rm {} \;

Users & Groupies

Users

To locate files belonging to a certain user:
# find /etc -type f \!  -user root -exec ls -l {} \;

-rw------- 1 lp sys 19731 2002-08-23 15:04 /etc/cups/cupsd.conf
-rw------- 1 lp sys 97 2002-07-26 23:38 /etc/cups/printers.conf

A subset of that same information, without having the cost of an exec:

root@ttyp0[etc]# find /etc -type f \! -user root -printf "%h/%f %u\\n"
/etc/cups/cupsd.conf lp
/etc/cups/printers.conf lp

If you know the uid and not the username then use the -uid argument:

$ find /usr/local/htdocs/www.linux.ie/ -uid 401

-nouser means there is no user in the /etc/passwd file for the files in question.

Groupies

find can locate files that belong to a specific group - or not, depending on how you use it.
This is especially suited to tracking down files that should belong to the www group but don't:

$ find /www/ilug/htdocs/  -type f \! -group  www

The -nogroup argument means there is no group in the /etc/group file for the files in question.
This may arise if a group is removed from the /etc/group file sometime after it's been used.
To search for files by the numerical group ID use the -gid argument:

$ find -gid 100

Permissions

If you've ever had one or more shell scripts not work because their execute bits weren't set, and want to sort things out once and for all, then you should like this little example:

knoppix@ttyp1[bin]$ ls -l ~/bin/

total 8
-rwxr-xr-x 1 knoppix knoppix 21 2004-01-20 21:42 wl
-rw-r--r-- 1 knoppix knoppix 21 2004-01-20 21:47 ww

knoppix@ttyp1[bin]$ find ~/bin/ -maxdepth 1 -perm 644 -type f \
-not -name .\*
/home/knoppix/bin/ww

Find locates the file that isn't set to execute, as we can see from the output of ls.
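
Having found the offending files, the same expression can fix them in one pass; a sketch, so double-check the matches before running it:

$ find ~/bin/ -maxdepth 1 -type f -perm 644 -not -name .\* -exec chmod 755 {} \;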

Types of files

The '-type' argument obviously specifies what type of file find is to go looking for (remember in Linux absolutely everything is represented as some type of file).
So far I've been using '-type f' which means search for normal files.

If we want to locate directories with '_of_' in their name we'd use:

$ find . -type d -name '*_of_*'

The list generated by this won't include symbolic links to directories.
To get a list including directories and symbolic links:

$ find . \( -type d -or -type l \) -name '*_of_*'

For a complete list of types check the man page.

Regular expressions

Thus far we've been using casual wildcards to specify certain groups of files. find also supports regular expressions, so we can use more advanced criteria for locating files. The matching expression must apply to the entire path:

ken@gemmell:/home/library/code$ find . -regex '.*/mp[0-4].*'

./library/sql/mp3_genre_types.sql

The -regex test has a case insensitive counterpart, -iregex.

There is a little gotcha with using regular expressions: you must allow for the full path of the files found, even if find is only searching the current directory:

$ cd /usr/share/doc/samba-doc/htmldocs/using_samba

$ find . -regex './ch0[1-2]_0[1-3].*'
./ch01_01.html
./ch01_02.html
./ch02_01.html
./ch02_02.html
./ch02_03.html

Limiting by filesystem

As an experiment, get an MS-DOS formatted floppy disk and mount it as root:

$ su -

# mount /floppy
# mount
/dev/sda2 on / type ext2 (rw,errors=remount-ro)
proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/fd0 on /floppy type msdos (rw,noexec,nosuid,nodev)

Now try

$ find / -maxdepth 1 -fstype msdos
You should see only /floppy listed.
To get the reverse of this, i.e. a listing of directories that are not on msdos file-systems, use
$ find / -maxdepth 1 \( -fstype msdos \) -prune -or -print
This is a start on limiting the files found by system type.

Summary

I've covered the vast majority of ways to use the find utility, but not absolutely everything. If you have any questions, please don't hesitate to email me.

Ken Guest.

Monday, July 07, 2008

Using the System File Checker Tool

The System File Checker (SFC) tool is a command-line tool that can be used to restore protected system files on your computer by using the backup versions that are stored in the Dllcache folder, or files copied from the Windows XP installation source.

Protected file types include those with .sys, .dll, .exe, .ttf, .fon and .ocx file name extensions.

You must be logged on as an administrator or as a member of the Administrators group to be allowed to run System File Checker.

System File Checker Tool Syntax

  • /Scannow: Scans all protected system files immediately and replaces incorrect versions with correct Microsoft versions. This command may require access to the Windows installation source files.
  • /Scanonce: Scans all protected system files one time when you restart your computer. This command may require access to the Windows installation source files when you restart the computer.
  • /Scanboot: Scans all protected system files every time you start your computer. This command may require access to the Windows installation source files every time you start your computer.
  • /Revert: Returns SFC to the default setting (do not scan protected files when you start the computer). The default cache size is not reset when you run this command.
  • /Purgecache: Purges the file cache and scans all protected system files immediately. This command may require access to the Windows installation source files.
  • /Cachesize=x: Sets the file cache size to x megabytes (MB). The default size of the cache is 50 MB. This command requires you to restart the computer, and then run the /purgecache command to adjust the size of the on-disk cache.

To start using SFC, go to Start > Run, type cmd in the Open box, then click OK to open a command prompt. Here you can use the sfc command with any of the switches indicated above; most of the time you'll be using sfc /scannow (note the space after sfc).

When you start SFC, you may see a "System File Checker" prompt asking you to insert your Windows XP CD, possibly several times during the process.

What you can do to eliminate this is to copy the I386 folder from your Windows XP CD to your hard drive. Just copy the whole folder. Note that it'll take up some 500 MB, but with today's large hard drives this shouldn't be a problem. If you didn't get a Windows CD when you purchased your computer, it is likely that this folder is already on your hard drive.

The next step is to let Windows know where to find the files. Follow these steps:

  1. Start the Registry Editor
  2. Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup
  3. Double click the value SourcePath in the right pane, and enter the location where you copied the I386 folder (if you copied the folder to the root of your C: drive, the value would be C:\).
  4. Close the registry editor, and log off from Windows, or restart your computer for the setting to take effect.
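
If you prefer the command line, the same change can be made with reg.exe, which ships with Windows XP; a sketch, assuming you copied I386 to the root of C:

reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup /v SourcePath /d C:\ /f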

Windows will keep track of updated system files that are introduced through the "normal" channels, such as Windows Update, Windows service pack installations using Update.exe, hotfixes installed using Hotfix.exe or Update.exe, and operating system upgrades using Winnt32.exe.

(http://www.helpwithwindows.com/WindowsXP/howto-24.htm)

Tuesday, May 20, 2008

How To Communicate Like A Pro

Here are six techniques you can use to help you say things simply but persuasively, and even forcefully:

(1) Get your thinking straight. The most common source of confusing messages is muddled thinking. We have an idea we haven't thought through. Or we have so much we want to say that we can't possibly say it. Or we have an opinion that is so strong we can't keep it in. As a result, we are ill prepared when we speak, and we confuse everyone. The first rule of plain talk, then, is to think before you say anything. Organize your thoughts.

(2) Say what you mean. Say exactly what you mean.

(3) Get to the point. Effective communicators don't beat around the bush. If you want someone to buy something, ask for the order. If you want someone to do something, say exactly what you want done.

(4) Be concise. Don't waste words. Confusion grows in direct proportion to the number of words used. Speak plainly and briefly, using the shortest, most familiar words.

(5) Be real. Each of us has a personality -- a blending of traits, thought patterns and mannerisms -- which can aid us in communicating clearly. For maximum clarity, be natural, and let the real you come through. You'll be more convincing and much more comfortable.

(6) Speak in images. The cliché that "a picture is worth a thousand words" isn't exactly true (try explaining the Internal Revenue code using nothing but pictures). But words that help people visualize concepts can be tremendous aids in communicating a message. Once Ronald Reagan's Strategic Defense Initiative became known as Star Wars, its opponents had a powerful weapon against it. The name gave it the image of a far-out, futuristic dream beyond the reach of current technology. Reagan was never able to come up with a more powerful positive image.

Your one-on-one communication will acquire real power if you learn to send messages that are simple, clear, and assertive; if you learn to monitor the hearer to determine that your message was accurately received; and if you learn to obtain the desired response by approaching people with due regard for their behavioral styles.

Your finesse as a communicator will grow as you learn to identify and overcome the obstacles to communication. Practice the six techniques I just mentioned, and you'll find your effectiveness as a message-sender growing steadily.

But sending messages is only half the process of communicating. To be a truly accomplished communicator, you must also cultivate the art of listening.

If you're approaching a railroad crossing around a blind curve, you can send a message with your car horn. But that's not the most important part of your communication task. The communication that counts takes place when you stop, look and listen.

We're all familiar with the warning on the signs at railroad crossings: Stop, Look and Listen. It's also a useful admonition for communication.

It's easy to think of communication as a process of sending messages. But sending is only half the process. Receiving is the other half. So at the appropriate time, we have to stop sending and prepare to receive.

A sign on the wall of Lyndon Johnson's Senate office put it in a down-to-earth way: "When you're talking, you ain't learning."

Listening Pays

Listening pays off daily in the world of business. Smart salespeople have learned that you can talk your way out of a sale, but you can listen your way into one. They listen to their customers to find out what their needs are, then concentrate on filling those needs. Skilled negotiators know that no progress can be made until they have heard and understood what the other side wants.

Listening Requires Thought and Care

Listening, like speaking and writing, requires thought and care. If you don't concentrate on listening, you won't learn much, and you won't remember much of what you learn.

Some experts claim that professionals earn between 40% and 80% of their pay by listening. Yet, most of us retain only 25% of what we hear. If you can increase your retention and your comprehension, you can increase your effectiveness in the 21st century's Age of Information.

Listen With Your Eyes

If you listen only with your ears, you're missing out on much of the message. Good listeners keep their eyes open while listening.

Look for feelings. The face is an eloquent communication medium. Learn to read its messages. While the speaker is delivering a verbal message, the face can be saying, "I'm serious," "Just kidding," "It pains me to be telling you this," or "This gives me great pleasure."

Some non-verbal signals to watch for:

- Rubbing one eye. When you hear "I guess you're right," and the speaker is rubbing one eye, guess again. Rubbing one eye often is a signal that the speaker is having trouble inwardly accepting something.

- Tapping feet. When a statement is accompanied by foot-tapping, it usually indicates a lack of confidence in what is being said.

- Rubbing fingers. When you see the thumb and forefinger rubbing together, it often means that the speaker is holding something back.

- Staring and blinking. If you've made your best offer and the other person stares at the ceiling and blinks rapidly, your offer is under consideration.

- Crooked smiles. Most genuine smiles are symmetrical. And most facial expressions are fleeting. If a smile is noticeably crooked, you're probably looking at a fake smile.

- Eyes that avoid contact. Poor eye contact can be a sign of low self-esteem, but it can also indicate that the speaker is not being truthful.

It would be unwise to make a decision based solely on these visible signals. But they can give you valuable tips on the kind of questions to ask and the kind of answers to be alert for.

Good Listeners Make Things Easy

People who are poor listeners will find few who are willing to come to them with useful information.

Good listeners make it easy on those to whom they want to listen. They make it clear that they're interested in what the other person has to say.

Nido Qubein is president of High Point University, chairman of an international consulting firm, and chairman of Great Harvest Bread Co. with 218 stores in 41 states. He is one of America's foremost experts and speakers on communication, business management, leadership, and success. His many books and audio programs have been translated into nearly two dozen languages and are sold worldwide. For a complete library of free articles, self-evaluation quizzes, and a learning resource center, please visit http://www.nidoqubein.com.

By: Nido Qubein



Sunday, February 03, 2008

Website Design and Development Cycle

This is the website design and development cycle of G2Blue, posted here for my personal review.

[Diagram: G2Blue website design and development cycle]

Website Design Preparation

Appointment of a dedicated Account Manager

We at G2Blue always appoint a dedicated Account Manager. You will have a guaranteed point of contact to enable the efficient and smooth running of any website design or development project we carry out for you.

Understanding of the client's business needs and aspirations through research and one to one meetings

Before we even meet with you we will make every effort to gain an understanding of your business as a whole. What you provide, who you provide it for and how you provide it.

When we meet with you we take time to establish where you would like your business to be, when you want it to be there and what you believe is currently stopping you being there.

Analysis of your existing advertising and branding media resources

We need to bring together your business cards, sign writing, letterheads, newspaper advertising, exhibition stands and product packaging. All these things need to be taken into account when considering your website design and development.

Analysis of your immediate competition

With your help we will establish who competes alongside you for business and ensure that we know and understand what your competitors are doing with their websites. This information will allow us to make sure that we provide you with a superior product.

Website Design Conception

Scrapbook design through meetings and examples

Website experiences and design examples are used to complete a rough picture of what you want the website design to look like, and what sort of user experience you expect. We will often ask you for examples of other websites you like the look of and enjoy using.

Multiple website design ideas, discussion and design decision

We will then take all the initial ideas together with your current branding and marketing material to formulate a group of ideas that articulate your company's objectives.

Anchor points discussed and created

We at G2Blue make a point of identifying key anchor points within your website. These areas form critical points of reference for the whole design process and are considered the premier traffic pages.

Refining these ideas will identify the strengths and weaknesses of your website and online strategy. Discussions will run parallel to our analysis of your online business policy, realising your target audience and their requirements.

Information architecture methodology

G2Blue applies 'Information Architecture' (IA) methods to your website design. This is the process of organising and presenting information to help the user find and manage information more successfully.

Please see our Information Architecture datasheet for more details.

The understanding of your information sets the foundations for a good user experience. We will use this to ensure that every user to your site can achieve their goals quickly and simply.

Search engine optimisation

We will optimise and build your website right from the conception phase with high search engine rankings in mind. As we are specialists in search engine optimisation, we will build your site so that it has the best possible chance of a strong presence on the major search engines and directories. Effective search engine optimisation strategies adopted at this early stage in website design and development can result in significantly lower search engine optimisation charges later on.

Specification, site maps and project plan

A written specification is drawn up detailing every page within the website. This helps us to determine the way users are guided and navigated throughout the site, and allows you to clearly see how each aspect of the website is connected to another. Once the specification has been agreed upon, sitemaps are drawn up showing paths through the site that the user might take.

These are comprehensive visual representations of the different features of your site. Quite often alterations are made to the specifications once you have a more visual image through viewing the site map.

Once the written specification and sitemaps have been finalised we can accurately judge the production timescales required to make the site.

G2Blue will produce a project plan that details each task and the time required for building & testing.

There will be milestones within the time line identifying important project dates to act as our progress guide throughout the lifecycle of the project.

Actual production time on the site will begin when all three of these documents have been mutually agreed upon and signed off.

Mutual responsibility, timescales and contract agreed

Contracts will be agreed and signed; they will identify each party's responsibilities from conception to the time the website appears live on the World Wide Web.

Website Creation

Design update meetings and milestone signoff

Throughout the whole project we will hold regular update meetings. These will comprise frank and honest discussions concerning the progress of your website design. It is our experience that website development time is significantly reduced by holding update meetings, and we recommend that you allocate us a Project Manager within your company who has authority over key decisions, to avoid project slip.

Progression viewable online

Website Testing

Technology integration testing

Unlike other website development companies, we will make every effort to make sure that the website we create for you works in any Internet browser available.

This may increase the length of design time required but will ensure that the website has a completely seamless online presence.

User interface testing

A comprehensive testing period is always allowed within a website project. We test your website page by page and also allow for at least a week for you to test the site independently.

Website Implementation & Launch

Website live

At a mutually convenient time we will upload your website to the agreed location. Please note that our in-house web servers provide a secure and fast hosting solution for your website.

Training

When we develop a website that requires interaction by your staff we will ensure that all employees concerned are trained properly. There needs to be total understanding and a way for all users to feel confident with the new website. In addition, we know from previous experience that a website needs to be promoted by your employees on a regular basis, and as such we recommend a quarterly meeting with them.


Website Review & Support

Review meetings

G2Blue believes that what happens after the delivery of a website is equally as important to all that takes place beforehand. We will agree regular meetings with you to ensure complete satisfaction and understanding of ongoing needs.

Direct query logging with guaranteed response

All of our website designs incorporate a query logging facility that can be used by clients to log queries direct to our designers and developers. We will respond, guaranteed.

All of our account managers have direct telephone numbers that are issued to clients as well as the mandatory email address.

Software Development Life Cycle (SDLC)

Summary: As in any other engineering discipline, software engineering also has some structured models for software development. This document provides a generic overview of the different software development methodologies adopted by contemporary software firms. Read on to learn more about the Software Development Life Cycle (SDLC) in detail.

Curtain Raiser

Like any other set of engineering products, software products are also oriented towards the customer. It is either market driven or it drives the market. Customer Satisfaction was the buzzword of the 80's. Customer Delight is today's buzzword and Customer Ecstasy is the buzzword of the new millennium. Products that are not customer or user friendly have no place in the market although they are engineered using the best technology. The interface of the product is as crucial as the internal technology of the product.

Market Research

A market study is made to identify a potential customer's need. This process is also known as market research. Here, the already existing need and the possible and potential needs that are available in a segment of the society are studied carefully. The market study is done based on a lot of assumptions. Assumptions are the crucial factors in the development or inception of a product's development. Unrealistic assumptions can cause a nosedive in the entire venture. Though assumptions are abstract, there should be a move to develop tangible assumptions to come up with a successful product.

Research and Development

Once the Market Research is carried out, the customer's need is given to the Research & Development division (R&D) to conceptualize a cost-effective system that could potentially solve the customer's needs in a manner that is better than the one adopted by the competitors at present. Once the conceptual system is developed and tested in a hypothetical environment, the development team takes control of it. The development team adopts one of the software development methodologies that is given below, develops the proposed system, and gives it to the customer.

The Sales & Marketing division starts selling the software to the available customers and simultaneously works to develop a niche segment that could potentially buy the software. In addition, the division also passes the feedback from the customers to the developers and the R&D division to make possible value additions to the product.

While developing software, the company outsources the non-core activities to other companies that specialize in those activities. This accelerates the software development process considerably. Some companies work on tie-ups to bring out a highly mature product in a short period.

Popular Software Development Models

The following are some basic popular models that are adopted by many software development firms:

A. System Development Life Cycle (SDLC) Model
B. Prototyping Model
C. Rapid Application Development Model
D. Component Assembly Model

A. System Development Life Cycle (SDLC) Model

This is also known as Classic Life Cycle Model (or) Linear Sequential Model (or) Waterfall Method. This model has the following activities.

1. System/Information Engineering and Modeling

As software is always part of a larger system (or business), work begins by establishing the requirements for all system elements and then allocating some subset of these requirements to software. This system view is essential when the software must interface with other elements such as hardware, people and other resources. The system is the basic and very critical requirement for the existence of software in any entity. So if the system is not in place, it should be engineered and put in place. In some cases, to extract the maximum output, the system should be re-engineered and spruced up. Once the ideal system is engineered or tuned, the development team studies the software requirements for the system.

2. Software Requirement Analysis

This process is also known as the feasibility study. In this phase, the development team visits the customer and studies their system. They investigate the need for possible software automation in the given system. By the end of the feasibility study, the team furnishes a document that holds the specific recommendations for the candidate system. It also includes personnel assignments, costs, project schedule, target dates, etc. The requirement-gathering process is then intensified and focused specifically on software. To understand the nature of the program(s) to be built, the system engineer or "analyst" must understand the information domain for the software, as well as the required function, behavior, performance and interfacing. The essential purpose of this phase is to find the need and to define the problem that needs to be solved.

3. System Analysis and Design

In this phase, the software's overall structure and its nuances are defined. In terms of client/server technology, the number of tiers needed for the package architecture, the database design, the data structure design, etc. are all defined in this phase. A software development model is thus created. Analysis and design are very crucial in the whole development cycle. Any glitch in the design phase could be very expensive to solve in a later stage of the software development, so much care is taken during this phase. The logical system of the product is developed in this phase.

4. Code Generation

The design must be translated into a machine-readable form. The code generation step performs this task. If the design is done in a detailed manner, code generation can be accomplished without much complication. Programming tools like compilers, interpreters and debuggers are used to generate the code. Different high-level programming languages like C, C++, Pascal and Java are used for coding. The right programming language is chosen with respect to the type of application.

5. Testing

Once the code is generated, the software program testing begins. Different testing methodologies are available to unravel the bugs that were committed during the previous phases. Different testing tools and methodologies are already available. Some companies build their own testing tools that are tailor made for their own development operations.

6. Maintenance

The software will definitely undergo change once it is delivered to the customer. There can be many reasons for this change to occur. Change could happen because of some unexpected input values into the system. In addition, the changes in the system could directly affect the software operations. The software should be developed to accommodate changes that could happen during the post implementation period.

B. Prototyping Model

This is a cyclic version of the linear model. In this model, once the requirement analysis is done and the design for a prototype is made, the development process gets started. Once the prototype is created, it is given to the customer for evaluation. The customer tests the package and gives his/her feed back to the developer who refines the product according to the customer's exact expectation. After a finite number of iterations, the final software package is given to the customer. In this methodology, the software is evolved as a result of periodic shuttling of information between the customer and developer. This is the most popular development model in the contemporary IT industry. Most of the successful software products have been developed using this model - as it is very difficult (even for a whiz kid!) to comprehend all the requirements of a customer in one shot. There are many variations of this model skewed with respect to the project management styles of the companies. New versions of a software product evolve as a result of prototyping.

C. Rapid Application Development (RAD) Model

The RAD model is a linear sequential software development process that emphasizes an extremely short development cycle. It is a "high speed" adaptation of the linear sequential model in which rapid development is achieved by using a component-based construction approach. Used primarily for information systems applications, the RAD approach encompasses the following phases:

1. Business modeling

The information flow among business functions is modeled in a way that answers the following questions:

What information drives the business process?
What information is generated?
Who generates it?
Where does the information go?
Who processes it?

2. Data modeling

The information flow defined as part of the business modeling phase is refined into a set of data objects that are needed to support the business. The characteristics (called attributes) of each object are identified and the relationships between these objects are defined.

3. Process modeling

The data objects defined in the data-modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.

4. Application generation

The RAD model assumes the use of RAD tools like VB, VC++, Delphi, etc. rather than creating software using conventional third-generation programming languages. The RAD model works to reuse existing program components (when possible) or create reusable components (when necessary). In all cases, automated tools are used to facilitate construction of the software.

5. Testing and turnover

Since the RAD process emphasizes reuse, many of the program components have already been tested. This minimizes the testing and development time.

D. Component Assembly Model

Object technologies provide the technical framework for a component-based process model for software engineering. The object-oriented paradigm emphasizes the creation of classes that encapsulate both data and the algorithms used to manipulate the data. If properly designed and implemented, object-oriented classes are reusable across different applications and computer-based system architectures. The Component Assembly Model leads to software reusability. The integration/assembly of already existing software components accelerates the development process. Nowadays many component libraries are available on the Internet. If the right components are chosen, the integration aspect is made much simpler.



Conclusion


All these different software development models have their own advantages and disadvantages. Nevertheless, in the contemporary commercial software development world, a fusion of these methodologies is usually adopted. Timing is very crucial in software development. If a delay happens in the development phase, the market could be taken over by a competitor. Also, if a 'bug'-filled product is launched in a short period of time (quicker than the competitors), it may affect the reputation of the company. So, there should be a tradeoff between the development time and the quality of the product. Customers don't expect a bug-free product, but they do expect a user-friendly product. That results in Customer Ecstasy!

Stylusinc.com