Private:Technical

From NMSL

Java

If you are using the Ant Eclipse plug-in and get the following error message

[javac] BUILD FAILED: file:C:/[ECLIPSE_DIR]/workspace/[PROJECT_DIR]/build.xml:32: Unable to find a javac compiler;
com.sun.tools.javac.Main is not on the classpath.
Perhaps JAVA_HOME does not point to the JDK

you are using the wrong Java Virtual Machine (JVM) with Eclipse. Ant uses the javac from the JVM that Eclipse itself is running on, no matter what you put in the compiler attribute. This is a problem because Eclipse uses the first Java VM it finds on your computer's PATH variable.

So you have to tell Eclipse to run on the SDK's JVM so that Ant can use the SDK's compiler. To do this, make a shortcut to eclipse.exe and change the target to:

[ECLIPSE_DIR]\eclipse.exe -vm [SDK_DIR]\bin\javaw.exe

Where [ECLIPSE_DIR] and [SDK_DIR] are the full paths to the Eclipse and Java SDK directories respectively.

Note that on Linux there is no executable corresponding to javaw.exe; use java instead.


C/C++

  • Dealing with: terminate called after throwing an instance of 'std::bad_alloc'


std::bad_alloc is an exception thrown by 'new' when it cannot allocate the memory you requested. Something (quite possibly the standard library containers, if you are using them) is requesting memory, and that memory cannot be allocated.

You should either change your memory usage patterns so that enough memory is available to satisfy all the requests, or catch bad_alloc and handle it in some appropriate way (or preferably, both).


  • Deleting Elements from STL Containers of Pointers

Imagine a vector of type std::vector<CObject*>. The vector does not manage CObjects, only pointers to CObjects. This means the vector manages only the memory required to store the pointers; it does not care about whatever the pointers point to. If you remove an element from such a vector, only the pointer is removed (and the space required to store the pointer is freed). The object that the pointer points to is not affected by this. If you want that object to be deleted as well (not just the pointer removed from the vector), it is your responsibility to delete it yourself.

This might be a bit confusing for people coming to the STL from a Java or .NET background. In Java and .NET, objects can only be accessed via references. When you have an array of objects (or a container of objects) in Java, you actually have an array of references to objects. In C++ this would be vector<object*>. There is no counterpart for C++'s vector<object> in Java.

The confusion is probably caused by the syntax. If you write "Object obj" in Java, you get a reference to an Object. If you write this in C++, you get an instance of Object; the counterpart of Java's "Object obj" in C++ would be "Object* obj".



SSH and SCP


Linux

  • Removing .svn directories in a checked out project (disconnecting the local copy from SVN)
find /path/to/destdir -type d -name '.svn' -prune -exec rm -rf {} \;
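A quick sketch on a throwaway tree (/tmp/svndemo is an assumed scratch path used only for the demonstration; -prune keeps find from descending into directories it is about to remove):

```shell
# build a fake checkout, then strip its .svn directories
mkdir -p /tmp/svndemo/a/.svn /tmp/svndemo/b/.svn
touch /tmp/svndemo/a/file.txt
find /tmp/svndemo -type d -name '.svn' -prune -exec rm -rf {} \;
find /tmp/svndemo -name '.svn' | wc -l    # 0: all .svn dirs are gone
```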
  • Using the Linux top utility

To investigate per-thread CPU usage on Linux, the recommended tool is top with the -H option, which adds per-thread information not shown in top's default output. The output of top -H shows the breakdown of the machine's CPU usage by individual threads.

To display per-CPU statistics, press the number 1; you should then see stats for each individual CPU core.

Sometimes we are interested in only a few processes, maybe just 4 or 5 out of all those running. For example, if you want to monitor process identifiers (PIDs) 4360 and 4358, you type:

$ top -p 4360,4358

OR

$ top -p 4360 -p 4358

It's easy: just use -p and list all the PIDs you need, separated by commas, or simply use -p multiple times, each coupled with a target PID.

Another possibility is monitoring only the processes of a certain user identifier (UID). For this, you can use the -u or -U option. Assuming user "johndoe" has UID 500, you can type:

$ top -u johndoe

OR

$ top -u 500

OR

$ top -U johndoe

In conclusion, you can use either the plain user name or the numeric UID.


A file descriptor is a data structure used by a program to get a handle on a file; the best-known ones are 0, 1, and 2 for standard input, standard output, and standard error. The limit on open file descriptors can be set at each boot, for each user. If you are user X, you can see your current limit with:

ulimit -n

If you have not used the ulimit command in your user profile, .bashrc file, or anywhere else to tweak the maximum number of open file descriptors, then you should see the value of INR_OPEN as set in include/linux/fs.h, that is: 1024.

You can determine how many open files you have using the "lsof" command:

lsof -u chris

This will show you all files opened by user chris. You can pipe the output of lsof to the wc (word count) command to get the number of files open by user chris, as follows:

lsof -u chris | wc -l


Note that 'lsof' lists open files. An open file may be a regular file, a directory, a block special file, a character special file, an executing text reference, a library, a stream, or a network file. Even though a file is open, it might not have a file descriptor associated with it; current working directories, memory-mapped files, and executable text files are examples. 'lsof | wc -l' therefore gives the current number of open files, so there will be a difference between the number of current open files and the number of current file descriptors/handles.
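As a quick cross-check, you can count actual file descriptors rather than open files by looking at a process's /proc entry (Linux-specific; here the current shell's own PID):

```shell
# count the file descriptors currently held by this shell process
fd_count=$(ls /proc/$$/fd | wc -l)
echo "this shell holds $fd_count file descriptors"
```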

To increase the maximum number of file descriptors allowed for a user, just put the following in his profile file, or .bashrc.

ulimit -n 32768

Now, you have to be careful here: if you are the only user on your system (not counting users needed by some programs, like your Servlet container (Tomcat) or Web server (wwwrun) etc.), and you want to be sure you have enough file descriptors, then go with a large number such as the 32768 above. But setting a limit that high for all users on your system may lead to system degradation.

This doesn't mean you can set the limit as you please. The /etc/security/limits.conf file contains the absolute maximums a user can set for himself with the ulimit command. For file descriptors, for example, you can set the "nofile" option as follows in /etc/security/limits.conf:


*               soft    nofile          1024
*               hard    nofile          2048
root            soft    nofile          2048
root            hard    nofile          32768
chris           soft    nofile          1024
chris           hard    nofile          32768
tomcat          soft    nofile          2048
tomcat          hard    nofile          8192

User chris will start with a maximum of 1024 (that is the "soft" line), but if he likes, he can set it up to 32768 with a ulimit command (in his profile, .bashrc file, on the command line and so on). User tomcat will start with a maximum of 2048 (just to be sure all those webapps get what they need) and can even increase it with a ulimit command up to a hard limit of 8192.
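You can inspect both values for the current session with ulimit's -S (soft) and -H (hard) switches:

```shell
# soft and hard open-file limits for the current shell session
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"
```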

But again, even if you are root and can edit the /etc/security/limits.conf file, there is a limit. Your "limit of limits" in this case is the value reported by:

cat /proc/sys/fs/file-max

This is what your kernel uses. To change the kernel parameter on the fly, you can "echo" the number you wish to /proc/sys/fs/file-max as follows (do it with caution please):

echo "300000" > /proc/sys/fs/file-max

Now test again to make sure that the new limit is set.

cat /proc/sys/fs/file-max


To know how many file descriptors are being used, do a

cat /proc/sys/fs/file-nr

You get an output like this.

8667        3145        288217
   |           |             |__ maximum number of file descriptors allowed on the system
   |           |     
   |           |__ total free allocated file descriptors
   |
   |__  total allocated file descriptors
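Since the file is just three whitespace-separated numbers, the fields can be split apart directly in the shell (Linux-specific, as /proc is):

```shell
# split /proc/sys/fs/file-nr into its three fields
read allocated free maximum < /proc/sys/fs/file-nr
echo "allocated=$allocated free=$free maximum=$maximum"
```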



Console colors are made possible by \033, the standard console "escape" code, which is equivalent to ^[, or 0x1B in hex. When this character is received, the Linux console treats the subsequent characters as a series of commands. These commands can do any number of neat tricks, including changing text colors.

Here's the actual syntax:

\033 [ <command> m

(In practice, you can't have any spaces between the characters; I've just inserted them here for clarity).


Anything following the trailing "m" is treated as text. It doesn't matter whether you leave a space after the "m" or not. So this is how you turn your text a deep forest green:

echo -e "\033[32mRun, forest green, run."

Note that the "-e" argument to "echo" turns on the processing of backslash-escaped characters; without this, you'd just see the full string of text in gray, command characters and all. Finally, the command "0" turns off any colors or otherwise funkified text:

\033[0m

Without the "0" command, your output will continue to be processed, like so:

echo -e "\033[32mThis is green."

echo "And so is this."

echo "And this."

echo -e "\033[0mNow we're back to normal."

Running a command that uses console colors (e.g., ls) will also reset the console to the standard gray on black.


Programming Console Colors

Of course, escape sequences aren't limited to shell scripts and functions. Let's see how the same result can be achieved with C and Perl:

C:

printf("\033[34mThis is blue.\033[0m\n");

Perl:

print "\033[34mThis is blue.\033[0m\n";


Available Colors

Now, how do you know which codes do what? The first eight basic EGA colors are defined as follows:

30  black foreground
31  red foreground
32  green foreground
33  brown foreground
34  blue foreground
35  magenta (purple) foreground
36  cyan (light blue) foreground
37  gray foreground


So, if I wanted the word "ocean" to appear in light blue, I could type the following:

echo -e "The \033[36mocean\033[0m is deep."


Combining Commands

Multiple console codes can be issued simultaneously by using a semicolon (";"). One useful command is "1", which sets text to bold. The actual effect is a lighter shade of the chosen color. So, to get a light magenta (purple) as shown in the first example, you would do this:

echo -e "\033[35;1mCombining console codes\033[0m"

This bolding feature allows you to access the other half of the standard 16 EGA colors. Most notably, brown turns into yellow, and gray turns into bright white. The other six colors are just brighter versions of their base counterparts.
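A short loop makes the normal/bold pairs easy to compare side by side (the output is captured in a variable only so it can be inspected afterwards):

```shell
# render each basic color in normal and bold intensity
demo=$(
  for code in 31 32 33 34 35 36 37; do
    printf '\033[%smcolor %s\033[0m  \033[%s;1mcolor %s bold\033[0m\n' \
           "$code" "$code" "$code" "$code"
  done
)
printf '%s\n' "$demo"
```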


Backgrounds

Text backgrounds can also be set with console codes, allowing you to have white on top of red (for example). Here is the full list of available background options:

40  black background
41  red background
42  green background
43  brown background
44  blue background
45  magenta background
46  cyan background
47  white background
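For example, combining a red background (41) with a bold white foreground (37;1):

```shell
# bold white text on a red background, then reset to normal
banner=$(printf '\033[41;37;1m WARNING \033[0m normal again')
printf '%s\n' "$banner"
```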


Finally, here are some other noteworthy command codes:

0   reset all attributes to their defaults
1   set bold
5   set blink
7   set reverse video
22  set normal intensity
25  blink off
27  reverse video off


Unfortunately, these techniques are limited to the console, as they don't display over telnet (unless the remote interface is also a Linux console).

Note that the codes given here are known as ECMA-48 compliant. That is, they work on systems other than Linux. (In case you're interested, ECMA is the European Computer Manufacturers Association, a standards body similar to the ISO). Any system with a VT-102 capable console can use the color codes demonstrated above.



A for loop uses the $IFS variable to determine what the field separators are. By default $IFS contains the space, tab, and newline characters. To read an entire line as a single element, you should change the separator:

OLD_IFS=$IFS
IFS=$'\n'

# ...for loop goes here...

IFS=$OLD_IFS
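Putting the pieces together, a complete sketch (the two sample lines are placeholders; $'\n' is bash syntax, as in the snippet above):

```shell
# iterate over whole lines by restricting $IFS to newline only
OLD_IFS=$IFS
IFS=$'\n'
collected=""
for line in $(printf 'first line\nsecond line\n'); do
  collected="$collected[$line]"
done
IFS=$OLD_IFS
echo "$collected"    # prints [first line][second line]
```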



In short, something is wrong with the if condition: it expects a single value but is receiving multiple values.



Finding the groups a user belongs to can be done using either the groups command or the id command.
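For example (both commands are standard; the UID printed by id -u can also be fed to top -u as described above):

```shell
# two ways to list the current user's group memberships
groups
id -Gn          # same list, via id
id -u           # the numeric UID, usable with top -u as well
```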



Use one of the greps (grep, egrep, fgrep). You want to use the -r "recursive" option to search a directory and all files included within it.

grep -r -i ONOCR /usr/include

[nsl@nsl]$ grep -r -i ONOCR /usr/include
/usr/include/asm/termbits.h:#define ONOCR       0000020
/usr/include/linux/cdk.h:#define        FO_ONOCR        0x8
/usr/include/linux/tty.h:#define O_ONOCR(tty)   _O_FLAG((tty),ONOCR)
/usr/include/bits/termios.h:#define ONOCR       0000020

The '-i' flag means ignore case, so it will match any combination of upper and lower case. If you only wanted the all-capitals form, you would omit it.


OS Compatibility Issues


LaTeX

FFMPEG

Given a series of BMP files, where each frame is named from, say, 000 to 999, it is possible to create a video out of them:

ffmpeg.exe -f image2 -r 25 -s 1024x768 -i color-cam6-f%03d.bmp camera6.mpg

(see http://ffmpeg.org/faq.html#SEC14)

where -r is for the rate and -s for the size.

You may need to stick to a certain order for the parameters when working under Linux. For example, to encode an H.264 video from a BMP sequence using libx264, the following can be used:

ffmpeg -i color-cam0-f%03d.bmp -vcodec libx264 -vpre medium -vpre main -f image2 -r 15 -psnr -qphist output.mp4

Notice that the input option "-i" comes first, followed by the parameters for the encoder.