Moving Android Studio from home to system folders

Android Studio takes up a lot of space with its SDKs and virtual devices, and the home folder fills up quickly. One option is to move the entire Android Studio suite into its own directory under one of the system folders. We will use /opt here, but /usr/local works just as well. First, we create the directory structure under /opt as root:

$ su
$ cd /opt
$ mkdir Android

The /opt/Android folder will hold Android Studio, the SDKs, and the AVDs.

$ mv /home/me/android-studio /opt/Android
$ mv /home/me/Sdk /opt/Android

The AVD folder is trickier, as it lives in a hidden directory inside the user's home directory.

$ cd /home/me/.android
$ mv avd /opt/Android

Now we need to link the moved folders so that Android Studio can access them.

1. Link the AVD folder as a normal user

Open another terminal as the regular user:

$ cd ~/.android
$ ln -s /opt/Android/avd avd

We then verify that the link points to the right directory:

$ ls -l
lrwxrwxrwx 1 me mygroup 16 Oct 13 09:46 avd -> /opt/Android/avd/
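
Alternatively, recent Android tooling can be pointed at the moved AVDs through an environment variable instead of a symlink; a sketch, assuming your emulator and avdmanager versions honor ANDROID_AVD_HOME:

$ echo 'export ANDROID_AVD_HOME=/opt/Android/avd' >> ~/.profile

The symlink has the advantage of working for any tool that simply looks in ~/.android/avd.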

2. Create a desktop file for Android Studio

This will allow the user to start Android Studio from the Ubuntu dash. Open a terminal as the regular user and create a new file called android-studio.desktop:

$ cd ~/.local/share/applications
$ gedit android-studio.desktop

We add these values to the file and save it.

[Desktop Entry]
Name=Android Studio
Exec=/opt/Android/android-studio/bin/studio.sh
Icon=/opt/Android/android-studio/bin/studio.png
Terminal=false
Type=Application
Categories=Development;IDE;
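
If the launcher does not appear in the dash, the desktop-file-validate tool (shipped in the desktop-file-utils package on Ubuntu) can check the entry for mistakes:

$ desktop-file-validate ~/.local/share/applications/android-studio.desktop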

3. Set the SDK location from inside Android Studio

Open Android Studio from the dash (start typing "android" and the Android Studio icon should pop up).

File -> Settings -> Appearance & Behavior -> System Settings -> Android SDK

Set the path to the new Android SDK location (/opt/Android/Sdk).

If the projects directory has also been moved, open one project manually; Android Studio will then automatically detect the new location of the other projects.

Notice that Android Studio with all its packages takes close to 9 GB:

$ cd /opt/Android/
$ du -sh
8.8G .

Swap columns of CSV file from Linux terminal

Swapping columns is an integral part of data analysis, and with GUI spreadsheet programs it is a simple four-step process. Suppose ColumnA and ColumnB need to be swapped; the following sequence does the job.

  1. Create a new column before ColumnA
  2. Cut ColumnB into this new column
  3. Cut ColumnA to the location of ColumnB
  4. Delete empty column

However, for massive databases a spreadsheet program is neither adequate nor recommended. The software takes a long time to load the file and may even stall while loading a large database. A simpler solution is to use AWK to swap the columns. This method is fast and efficient. A typical AWK command to rearrange the columns of a database looks like this:

awk -F ',' 'BEGIN{OFS=",";} {print $1, $5, $3, $4, $2}' test.csv

This command swaps column 2 with column 5. It is simple and elegant, but it has drawbacks. The user has to type every column number by hand, which becomes tedious and error-prone as the number of columns grows; a huge database might have more than 50 columns. The solution is a shell script that achieves the same result with an AWK command but is more user-friendly.

#!/bin/bash
# swapcolumns v2.1 - this script swaps two columns of a csv file
# Usage: ./swapcolumns [inputfilename.csv] [outputfilename.csv] [delimiter]

# Input file test
if [ -z "$1" ]; then
    echo "Input filename absent"
    read -p 'Enter input filename: ' inputfilename
else
    inputfilename=$1
fi
# check that the input file exists
if [ ! -f "$inputfilename" ]; then
    echo "File $inputfilename does not exist. Exiting."
    exit 1
fi

# Output file test
if [ -z "$2" ]; then
    echo "Output filename absent"
    read -p 'Enter output filename: ' outputfilename
else
    outputfilename=$2
fi

# Detect the column delimiter (explained in "Identifying delimiter of a CSV file" below)
detecteddelimiter=$(head -n1 "$inputfilename" | tr -d '[a-z][A-Z][0-9]' | sed 's/.\{1\}/&\n/g' | sort -r | uniq -c | sort -nr | tr -s " " | cut -d" " -f3 | head -n1)
if [ -z "$3" ]; then
    echo "Delimiter detected: $detecteddelimiter"
    read -p 'Press enter to accept this as the delimiter or enter one: ' delimiter
    if [ -z "$delimiter" ]; then
        delimiter=$detecteddelimiter
    fi
else
    delimiter=$3
fi

# Display a numbered list of the column headers
head -n1 "$inputfilename" | tr "$delimiter" '\n' | nl
echo

read -p 'Enter column 1: ' col1
read -p 'Enter column 2: ' col2

columnlength=$(head -n1 "$inputfilename" | tr "$delimiter" '\n' | wc -l)

# Build the argument list for awk's print, swapping col1 and col2;
# a separating comma is appended only when more columns follow, so
# the last column is not followed by a stray comma
awkstr=""
for i in $(seq 1 "$columnlength"); do
    if (( i == col1 )); then
        awkstr="$awkstr \$$col2"
    elif (( i == col2 )); then
        awkstr="$awkstr \$$col1"
    else
        awkstr="$awkstr \$$i"
    fi
    if (( i != columnlength )); then
        awkstr="$awkstr,"
    fi
done

# '"$variable"' - the double quotes expand the variable to its value while
# the single quotes let AWK interpret it as an argument to print
# "'"$delimiter"'" requires an extra "" around it so that it is interpreted
# as OFS="," and not as OFS=,

awk -F "${delimiter}" 'BEGIN{OFS = "'"$delimiter"'" ;} {print '"$awkstr"' }' "$inputfilename" > "$outputfilename"
echo "Output written to $outputfilename"

After the usual checks for missing filenames, the script extracts the column headers using head (which outputs the first part of a file) and replaces the column delimiter with a newline using tr. This produces a list of column headers, one per line. An nl command numbers the lines, making it easier for the user to choose the columns. The script then reads the column numbers from the input stream using the read command. Once the user input is loaded, a similar one-liner extracts the number of columns, and a loop builds a string of column numbers in the order specified by the user (stored in the variable awkstr). The loop also handles the special case of the last column of the database, avoiding a trailing separating character (comma) after it.
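
For instance, on the test file introduced below, the header pipeline produces the numbered column list:

$ head -n1 test.csv | tr ',' '\n' | nl
     1  Test
     2  NumA
     3  NumB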

The AWK command needs some explanation. The '"$awkstr"' variable, which holds the string that reorders the columns, needs to be expanded inside the AWK command and at the same time interpreted as an argument to print. The double quotes around the bash variable expand it to its value, while the single quotes let AWK treat the expanded text as arguments to its print command. The quotes around the delimiter variable work in the same manner, except that an extra pair of double quotes is needed around it so that it is interpreted as OFS="," and not as OFS=, (the absence of quotes generates an AWK error).
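
A minimal illustration of this quoting at the shell prompt (the variable values are throwaway examples):

$ delimiter=','
$ awkstr=' $2, $1'
$ echo 'a,b' | awk -F "$delimiter" 'BEGIN{OFS = "'"$delimiter"'" ;} {print '"$awkstr"' }'
b,a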

To see the script in action, let us swap columns 1 and 2 of a test file. The test CSV file, displayed below, has three columns: Test, NumA, and NumB.

$ cat test.csv 
Test,NumA,NumB
A,2,9
B,2,9
C,6,17
D,1,7
E,3,11
F,3,11
G,6,17
H,5,15

When the script is run on this test data, it automatically detects the delimiter and displays a numbered list of the columns. It then asks the user to choose the two column numbers that need to be swapped. The output of the script, written to out.csv, is shown after the run.

$ ./swapcolumns.sh test.csv out.csv 
Delimiter detected: ,
Press enter to accept this as the delimiter or enter one: 
1 Test
2 NumA
3 NumB

Enter column 1: 1
Enter column 2: 2
Output written to out.csv

$ cat out.csv 
NumA,Test,NumB
2,A,9
2,B,9
6,C,17
1,D,7
3,E,11
3,F,11
6,G,17
5,H,15

The ordering of column numbers is immaterial.

$ ./swapcolumns.sh test.csv out.csv 
Delimiter detected: ,
Press enter to accept this as the delimiter or enter one: 
1 Test
2 NumA
3 NumB

Enter column 1: 2
Enter column 2: 1
Output written to out.csv

$ cat out.csv 
NumA,Test,NumB
2,A,9
2,B,9
6,C,17
1,D,7
3,E,11
3,F,11
6,G,17
5,H,15

The automatic detection of the delimiter (see the variable $detecteddelimiter in the code above) is a very useful, albeit long, piece of code; it is discussed in detail in the section "Identifying delimiter of a CSV file" below.

When one of the swapped columns is the last column of the database, there is a special case: the generated print list must not end with a trailing comma. The for loop handles this by appending the separating comma only when more columns follow. A run involving the last column:

$ ./swapcolumns.sh test.csv 
Output filename absent
Enter output filename: out.csv
Delimiter detected: ,
Press enter to accept this as the delimiter or enter one: 
1 Test
2 NumA
3 NumB

Enter column 1: 2
Enter column 2: 3
Output written to out.csv

$ cat out.csv
Test,NumB,NumA
A,9,2
B,9,2
C,17,6
D,7,1
E,11,3
F,11,3
G,17,6
H,15,5

The script's error handling catches missing, misspelled, and non-existent filenames.

$ ./swapcolumns.sh 
Input filename absent
Enter input filename: junk.csv
File junk.csv does not exist. Exiting.

The script can easily be modified to remove a column instead of swapping two.

#!/bin/bash
# removecolumns v1.0 - this script removes a column from a csv file
# Usage: ./removecolumns [inputfilename.csv] [outputfilename.csv] [delimiter]

# Input file test
if [ -z "$1" ]; then
    echo "Input filename absent"
    read -p 'Enter input filename: ' inputfilename
else
    inputfilename=$1
fi
# check that the input file exists
if [ ! -f "$inputfilename" ]; then
    echo "File $inputfilename does not exist. Exiting."
    exit 1
fi

# Output file test
if [ -z "$2" ]; then
    echo "Output filename absent"
    read -p 'Enter output filename: ' outputfilename
else
    outputfilename=$2
fi

# Detect the column delimiter
detecteddelimiter=$(head -n1 "$inputfilename" | tr -d '[a-z][A-Z][0-9]' | sed 's/.\{1\}/&\n/g' | sort -r | uniq -c | sort -nr | tr -s " " | cut -d" " -f3 | head -n1)
if [ -z "$3" ]; then
    echo "Delimiter detected: $detecteddelimiter"
    read -p 'Press enter to accept this as the delimiter or enter one: ' delimiter
    if [ -z "$delimiter" ]; then
        delimiter=$detecteddelimiter
    fi
else
    delimiter=$3
fi

# Display a numbered list of the column headers
head -n1 "$inputfilename" | tr "$delimiter" '\n' | nl
echo

read -p 'Enter column to be removed: ' col1

columnlength=$(head -n1 "$inputfilename" | tr "$delimiter" '\n' | wc -l)

# Build the argument list for awk's print, skipping the removed column;
# prepending the comma to every field except the first keeps the list
# valid even when the removed column is the first or the last one
awkstr=""
for i in $(seq 1 "$columnlength"); do
    if (( i != col1 )); then
        if [ -z "$awkstr" ]; then
            awkstr="\$$i"
        else
            awkstr="$awkstr, \$$i"
        fi
    fi
done

# '"$variable"' - the double quotes expand the variable to its value while
# the single quotes let AWK interpret it as an argument to print
# "'"$delimiter"'" requires an extra "" around it so that it is interpreted
# as OFS="," and not as OFS=,

awk -F "${delimiter}" 'BEGIN{OFS = "'"$delimiter"'" ;} {print '"$awkstr"' }' "$inputfilename" > "$outputfilename"
echo "Output written to $outputfilename"


List files with absolute pathname in Linux

ls -d $PWD/*

$PWD/* expands to the absolute path of the present working directory with each entry of the directory listing (*) appended to it.

ls displays that list, while -d prevents ls from descending into each directory in the list and listing its contents.
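
For example, with the /opt/Android layout from the first section, the listing would look something like this:

$ cd /opt/Android && ls -d $PWD/*
/opt/Android/android-studio
/opt/Android/avd
/opt/Android/Sdk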

We can also print the list of files in all sub-directories with paths relative to the current directory:

find . -type f
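
find can also emit absolute pathnames directly when handed an absolute starting point:

$ find "$PWD" -type f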

Convert filelist to an Excel database (importable ebook list)

Let us say we have a collection of ebooks or papers/articles sorted into various folders, and we want to create a database (or spreadsheet) of those papers or books so that we can add comments or notes next to them. For example, suppose we have a file structure like this (output of find . -type f):

./entanglement-entropy-holography/1006.1263.pdf
./entanglement-entropy-holography/0912.1877.pdf
./entanglement-entropy-holography/0911.3160v2.pdf
./entanglement-entropy-holography/0912.1877v2.pdf
./entanglement-entropy-holography/1010.1682.pdf

./graviton-propagator/zee-1979-PhysRevLett.42.417.pdf
./graviton-propagator/dewitt-3-PhysRev.162.1239.pdf
./graviton-propagator/dewitt-2-PhysRev.162.1195.pdf
./graviton-propagator/dewitt-1-PhysRev.160.1113.pdf
./SUSY/Piguet-9710095v1.pdf
./SUSY/Olive_susy_9911307v1.pdf
./SUSY/sohnius-introducing-susy-1985.pdf
./SUSY/khare-cooper-susy-qm-phys.rept-1995.pdf
./SUSY/Instantons Versus Supersymmetry9902018v2.pdf
and we want this list converted to the database format shown below.

Article                                        Type                             Notes
1006.1263.pdf                                  entanglement-entropy-holography
0912.1877.pdf                                  entanglement-entropy-holography
0911.3160v2.pdf                                entanglement-entropy-holography
0912.1877v2.pdf                                entanglement-entropy-holography
1010.1682.pdf                                  entanglement-entropy-holography
zee-1979-PhysRevLett.42.417.pdf                graviton-propagator
dewitt-3-PhysRev.162.1239.pdf                  graviton-propagator              Difficult
dewitt-2-PhysRev.162.1195.pdf                  graviton-propagator              Difficult
dewitt-1-PhysRev.160.1113.pdf                  graviton-propagator              Difficult
Piguet-9710095v1.pdf                           SUSY
Olive_susy_9911307v1.pdf                       SUSY
sohnius-introducing-susy-1985.pdf              SUSY
khare-cooper-susy-qm-phys.rept-1995.pdf        SUSY
Instantons Versus Supersymmetry9902018v2.pdf   SUSY                             Random comment
The last column (Notes) is added by the user after the data is imported. To import the data in the above format, we need the directory name (Type) and the filename to be reversed and printed as columns separated by a TAB. We could use any other delimiter, but with TAB as the column delimiter a spreadsheet program automatically splits the imported text into two columns.

$ find . -type f -print | sed -r 's|(.*)\/|\1+|' | awk -F"+" '{print $2"\t"$1}' | sed 's|\.\/||'

The find command lists all files and pipes them to sed, which replaces the last forward slash (/) with a +. This replacement gives awk a marker (+) at which to splice each string into two parts: the first part is the TYPE and the second part is the FILENAME. awk then switches the order of the fields TYPE and FILENAME and puts a TAB between them. A simple copy-paste of the output into a spreadsheet program will then automatically sort the two fields into two different columns.
Detailed explanation:

find . -type f

lists regular files recursively from all sub-directories.

sed -r 's|(.*)\/|\1+|'

-r enables extended regular expressions (ERE) in the pattern matching.

The | delimiter is used instead of the conventional / so that the / characters in the paths do not have to be escaped.

(.*)\/ matches everything up to the last forward slash (/), because sed's pattern matching is greedy.

The captured group (.*) is stored in \1 and put back, while the matched forward slash (/) is replaced by +.

awk -F"+" '{print $2"\t"$1}'

-F sets the input field separator to + so that awk splices the input string at the +, which the previous sed operation conveniently inserted at the position of the last forward slash (/).

'{print $2"\t"$1}' prints field 2, a TAB, and field 1 in that order, effectively interchanging the columns and inserting a TAB between them.

Finally, sed 's|\.\/||' strips the leading ./ that find prepends to every path. The output will look like this:

$ find . -type f -print | sed -r 's|(.*)\/|\1+|'  | awk -F"+" '{print $2"\t"$1}' | sed 's|\.\/||'

1006.1263.pdf entanglement-entropy-holography 
0912.1877.pdf entanglement-entropy-holography 
0911.3160v2.pdf entanglement-entropy-holography 
0912.1877v2.pdf entanglement-entropy-holography 
1010.1682.pdf entanglement-entropy-holography 
zee-1979-PhysRevLett.42.417.pdf graviton-propagator 
dewitt-3-PhysRev.162.1239.pdf graviton-propagator
dewitt-2-PhysRev.162.1195.pdf graviton-propagator
dewitt-1-PhysRev.160.1113.pdf graviton-propagator
Piguet-9710095v1.pdf SUSY 
Olive_susy_9911307v1.pdf SUSY 
sohnius-introducing-susy-1985.pdf SUSY 
khare-cooper-susy-qm-phys.rept-1995.pdf SUSY 
Instantons Versus Supersymmetry9902018v2.pdf SUSY
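
To skip the copy-paste step, the same pipeline can be redirected into a file with a .tsv (tab-separated values) extension, which most spreadsheet programs will open and split into columns directly; booklist.tsv is just an example name:

$ find . -type f -print | sed -r 's|(.*)\/|\1+|' | awk -F"+" '{print $2"\t"$1}' | sed 's|\.\/||' > booklist.tsv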

Identifying delimiter of a CSV file

The following one-liner can be used to extract the delimiter of a CSV file. It does not work on TAB-separated files; it only works on delimited files whose field separator is not whitespace.

$ head -n1 bookmerged.csv  | tr -d '[a-z][A-Z][0-9]' | \
tr -d '"' | sed 's/.\{1\}/&\n/g' | sort -r | uniq -c | \
sort -nr | tr -s " " | cut -d" " -f3 | head -n1

This command generates a list of the special characters in the header line and selects the character with the highest frequency of occurrence. That character must be the delimiter, unless some other special character occurs even more often, in which case the code fails. An explanation of the code follows.

After head grabs the column headers, the first two tr (translate) commands remove all letters, numbers, and quotes. This leaves a string of special characters, among which the character with the highest frequency of occurrence is most likely the field delimiter.

,,,,,   , ,, , , ,,, ,, , ,/ , , , 

The sed command introduces a newline after every character, effectively putting each character on its own line. .\{1\} matches exactly one character at a time (\{ and \} are the escaped interval braces of basic regular expressions), and & puts the matched character back followed by the newline; \0 can be used instead of &. sort -r | uniq -c | sort -nr then generates the list of characters in descending order of prevalence.

     20 ,
     14  
      1 /
      1 

The most prevalent character appears at the top of this list. tr -s " " squeezes the runs of spaces into single ones, and the cut command splits the list along the spaces and selects the third field, which is the delimiter.
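
As a quick check, running the one-liner on the test.csv file from the column-swapping section correctly reports the comma:

$ head -n1 test.csv | tr -d '[a-z][A-Z][0-9]' | sed 's/.\{1\}/&\n/g' | \
sort -r | uniq -c | sort -nr | tr -s " " | cut -d" " -f3 | head -n1
,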