17Nov/20

Create Github Repo On The Go From The Shell – Github API

Sometimes you start coding something and you don't yet know whether it will be a project worth sharing on github. We don't always think about this as we begin coding. This is a write-up on how to deal with the case where you code something up and then, after a while, realize you want to share it. This method also works for creating fresh new repos that you haven't started coding yet.

Of course the normal practice is to create a github repo from the browser and then follow its instructions to git init and push your local code.

That's all fine and nice, but it's time consuming to open up the browser and create the github repo. Luckily, we can do it using the github api.

First you need to get a github API key. Then, using curl, you can send a command to github to create your repo for you. There are many settings you can tweak. Specifically, we want the basics: a public repo without any commits or files (no README.md).

Step 1 – Create API Token

First you have to create an authentication token – a personal access token:

  • Login to github.com on your browser
  • Go to Settings -> Developer settings -> Create Personal Access Token
  • Hit Generate button
  • In the note textbox, write its purpose, ex: "creating repos from command line"
  • Then give it proper access in the scopes:
    • Check on everything in the repo section
    • Check on gist
  • That is it. When you submit this info, it will give you an access token (long alphanumeric string)
  • Save that access token string. We will be using it in our curl commands. This token is the equivalent of providing a username and password (so don't lose it and don't share it)

Step 2 – Command Line

After you get your key you can now use it in the shell. For example’s sake we use abc123 as the key (your key will have more characters).

Here is how it will look in your workflow:

  • First, create directory and code some stuff
  • Realize you are making a repo
  • git init the repo. That only saves it locally
  • Make a commit
  • Now create the github repo using the curl command shown below. Note we explicitly set auto_init to false so that it doesn't create a first commit with a template README.md file. Also, we want a public repo, so we set private to false.
    • Change abc123 to your authorization token alphanumeric value
    • Change REPONAME to your repo name. Only use letters, numbers, dot, underscore and minus: A-Za-z0-9._-
  • Then set the git origin, which is the remote repository server and repo. Make sure to use the https://github.com/USERNAME/REPONAME.git link; if you used the wrong link, remove it with git remote remove origin. This link is shown in the curl output (look for "clone_url")
    • Change USERNAME to your github username and REPONAME to your reponame
  • Set the branch (master or main) as upstream and push, as shown below
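Putting that together, the whole thing might look like this (abc123, USERNAME and REPONAME are placeholders, and the branch may be master or main):

# create the empty public repo on github (no auto-generated first commit)
curl -H "Authorization: token abc123" https://api.github.com/user/repos -d '{"name":"REPONAME","private":false,"auto_init":false}'

# point the local repo at github and push
git remote add origin https://github.com/USERNAME/REPONAME.git
git push -u origin master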

More Info:

  • More information on the github api, such as more options to pass in the json string with the -d argument to affect the type of repo that gets created: https://developer.github.com/v3/repos/#create-a-repository-for-the-authenticated-user
  • The simplest form of this call is curl -H "Authorization: token abc123" https://api.github.com/user/repos -d '{"name":"REPONAME"}'. However, for a fluid work process we want a completely empty repo, so we explicitly add "auto_init": false. We also set "private": false, so that we get a public repo.
  • Previously, you could use the api without a token, using your username and password; that has been deprecated as it's unsafe. The command looked like this: curl -u user:pass https://api.github.com/user/repos -d '{"name":"REPONAME"}'

Making an alias for easier use

It might be annoying to always type those long commands, so you can write an alias and stick it in your .bashrc or .bash_profile. However, that alias would be really long. I prefer to create an environment function – a regular bash function that is defined in the shell startup file instead of in a script, and can be called straight from the shell.

Here is what I have in my .bashrc/.bash_profile:
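A minimal sketch of the general shape of both (the CreateGitRepo name is the one used later in this post; USERNAME stands for your github username):

# ALIAS 1 - token stored right in the function (replace abc123)
CreateGitRepo () {
    local TOKEN="abc123"
    curl -H "Authorization: token ${TOKEN}" https://api.github.com/user/repos -d "{\"name\":\"$1\",\"private\":false,\"auto_init\":false}"
    echo "git remote add origin https://github.com/USERNAME/$1.git"
    echo "git push -u origin master"
}

# ALIAS 2 - token read from a file
CreateGitRepo () {
    local TOKEN=$(cat ~/.github-api-key)
    curl -H "Authorization: token ${TOKEN}" https://api.github.com/user/repos -d "{\"name\":\"$1\",\"private\":false,\"auto_init\":false}"
    echo "git remote add origin https://github.com/USERNAME/$1.git"
    echo "git push -u origin master"
}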

Don't forget to comment out whichever alias function you don't want to use (ALIAS 1 or ALIAS 2). I personally use ALIAS 1, as I don't like having extra files.

For ALIAS 1, don’t forget to put the Auth Token in the variable, thus replacing abc123.

For ALIAS 2, don’t forget to create file ~/.github-api-key with your key: echo "abc123" > ~/.github-api-key

If you just created the function, don't forget to source your .bashrc or .bash_profile (or whichever file you put it in): source ~/.bashrc

Using the Alias / Function on the Go

Now you can finally use it like this – just an example (you can use it other ways – for example, you can start with the CreateGitRepo command):

  • Create a directory Project1: mkdir Project1
  • Change into the dir: cd Project1
  • Code some stuff up (in this example, a file called wow.js)
  • Initialize the local git repo in the current dir: git init
  • Stage the current files (wow.js): git add wow.js
  • Commit the staged files to the local repo with a descriptive comment: git commit -m "first working version of wow.js"
  • Now create the empty public remote repo (in other words, create the repo on github): CreateGitRepo Project1
  • This will show a lot of lines of output if all worked correctly.
  • So far the repo has been created, but the code has not been pushed to it yet. The final 2 lines of the CreateGitRepo output give you the final 2 commands to set the origin and push the code to github. For this example those 2 commands would look like this:
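(USERNAME being your github username; the branch may be master or main)

git remote add origin https://github.com/USERNAME/Project1.git
git push -u origin master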

Create Private Git Repo

After using this function a bit, I realized having a private repo creator is just as useful. Here is the same alias and function made for private repos. The alias and function have an extra suffix to differentiate them. Copy and paste into your .bashrc or .zshrc.
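A sketch along the same lines as before (the Private suffix on the name is illustrative; the only functional change is "private": true):

CreateGitRepoPrivate () {
    local TOKEN="abc123"
    curl -H "Authorization: token ${TOKEN}" https://api.github.com/user/repos -d "{\"name\":\"$1\",\"private\":true,\"auto_init\":false}"
    echo "git remote add origin https://github.com/USERNAME/$1.git"
    echo "git push -u origin master"
}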

Final Thoughts

All of the original commands create a public repo. You can modify that by setting private to true (change "private": false to "private": true). If you want, you can even create your own function for that.

The end

12Nov/20

Backup file[s] to Dropbox (without syncing)

This github page, https://github.com/andreafabrizi/Dropbox-Uploader, has the dropbox-uploader tool, which I use to backup content from servers (Linux, Mac, etc) without having to sync my Dropbox content to the server's local disk.

It uses the Dropbox API (docs here). You can use the API pretty easily with curl commands, but for files over 150 MiB it gets complicated (chunking and such).

Before using the uploader, get a Dropbox API key. Here is how you do that.

Note that there are 2 levels of access each API key gets (which you configure):

  • Full access – allows the key to access your entire Dropbox (so if it fell into the wrong hands it could read/write/delete anything – even go as far as deleting everything). The benefit is that you can specify which directory in your Dropbox file system to backup files to.
  • Application access – this limits the key to only the /Apps/<appname> directory (if the /Apps/<appname> directory is missing, it will be created in the root of your Dropbox file system). So you can only backup content to /Apps/<appname>/. Within that directory the key has full control (well, the access can be fine-tuned)

For backup purposes just use the application-level access; there is no reason for it to potentially have access to all of your Dropbox data.

More info on these access levels and OAuth authentication is here: https://www.dropbox.com/lp/developers/reference/oauth-guide

Steps by example:

First install dropbox_uploader.sh to any location of your choosing in your filesystem.

Now create a db.conf file. If you run dropbox_uploader.sh without the -f option, it will prompt you for your OAuth access token and create the conf file in its default location. I prefer putting it in a custom location:

echo "OAUTH_ACCESS_TOKEN=gvi4325ffFAKEKEYwadfasd-i234asdfa" > ~/backups/db.conf

Then, if I want to copy a file called yourfile.txt, first I delete it from the destination. I do this to avoid possibly time-consuming hash checks: if the file already exists at the destination, this script does a hash check.


./dropbox_uploader.sh -f ~/backups/db.conf delete /yourfile.txt

Then I upload the file by specifying the source and destination locations. Note the destination location must start with a forward slash. If your key has full access it will go to Dropbox:yourlocation. If your key has app access it will go to Dropbox:/Apps/appname/yourlocation. If you are unsure whether the directories exist, don't worry; the API creates the needed directories (even if they are a few nested ones).

./dropbox_uploader.sh -f ~/backups/db.conf upload ~/yourfile.txt /yourfile.txt

Cron

I use this tool to backup servers (data compressed with tar+xz => good high compression) to dropbox. In my backup script, which is called by cron daily (or however often), I do all of the above steps, as sketched below. I create the db.conf every time; that way I can see all of the settings and commands from the backup script and don't have to rummage through my filesystem for my config file.
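A minimal sketch of such a backup script (paths, file names, and the tar targets are examples; the token is fake):

#!/bin/bash
# compress the data with tar+xz
FILE=backup-$(date +%F).tar.xz
tar -cJf /tmp/${FILE} /data
# recreate the conf each run so all settings live in this one script
echo "OAUTH_ACCESS_TOKEN=gvi4325ffFAKEKEYwadfasd-i234asdfa" > ~/backups/db.conf
# delete first to avoid the hash check, then upload
/opt/dropbox_uploader.sh -f ~/backups/db.conf delete /${FILE}
/opt/dropbox_uploader.sh -f ~/backups/db.conf upload /tmp/${FILE} /${FILE}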

The end

09Nov/20

xargs parallel note to self

xargs is useful for running things in parallel, and its parallel processing is very efficient. Read this post about its efficiency and this one about basic commands.

Below is my favorite way to run a single command repeated (or parallelized) over a list of inputs:
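A minimal sketch of the pattern (assuming GNU xargs; -P sets how many run in parallel):

cat inputlist | xargs -P 4 -I {} command {}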

Replace inputlist with anything. Each line of the input list gets run once by the command. In the command you can use {} if you need to reference the line.

Below is a multi-command method:
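A sketch of the same idea, with the commands wrapped in bash -c (again an assumption of the general shape):

cat inputlist | xargs -P 4 -I {} bash -c 'command1 {}; command2 {}'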

Note: I recommend using single quotes on the outside, as seen above.

Note: you can redirect the output at the very end, outside of the single quotes; this way the output of every run is saved to one place. You can save each separate run's output if you redirect inside the quotes.

Example redirections:
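For example (file names are illustrative):

# everything into one file: redirect outside the quotes
cat inputlist | xargs -P 4 -I {} bash -c 'command1 {}' > all-runs.out

# one file per run: redirect inside the quotes
cat inputlist | xargs -P 4 -I {} bash -c 'command1 {} > {}.out'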

The end

29Oct/20

Python + Guitar + All Notes

I came across this medium article about python, guitar strings, and plotting scales. It has an interesting Jupyter notebook to work with, allowing you to plot scales for all of the keys. It was great, except it only covered 20 frets. My guitar has 24 frets, so I modified the script to allow for 24 frets and to save the plots. (I have not tested with more than 24.)

So you can get something like this:

C Major scale (every whole note):

E Minor Blues scale:

The rest can be viewed on my Github (go into the Scales directory).

I then came across articles like this that also mapped each guitar string note to a midi value, from 16 (low E) up to N, where N is the highest note. So in my case of 24 frets, N is 64, as that is the 24th fret on the high E string. Each note increment is +1. I found this fascinating, so I modified the scripts to print all notes plus their midi values. Immediately you see the pattern that every 5th fret is the same note as the string above it (besides the change from the G to the B string).

Here is every whole note with midi values. This is also the C major scale:

If the images are too small then just zoom in.

23Oct/20

Python remove duplicates similar to bash uniq + sort

26Jun/20

iostat service time (svctm) rule of thumb

iostat service time is a very useful metric when analyzing disk performance and finding bottlenecks.

service time is essentially the inverse of IOPs

so if an operation takes 1 ms to service, then your IOPs are 1000 (you can complete 1000 operations in a second if each operation takes you 1 ms to complete)

the formula for this is as follows; just put this into google and it will do the math for you:

(S)^-1 = ? hz

S is the service time in milliseconds. Ignore the hz word; that's just to convert the output to IOPs instead of kiloIOPs etc.

in your calculator it's the equivalent of this:

IOPS=(S/1000)^-1

for 1 ms service time we have 1000 IOPs

for 2 ms service time we have 500 IOPs

for 10 ms service time we have 100 IOPs
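If you want this at the shell, a tiny helper does the same math (a sketch; iops is just an example name):

iops () { echo "1000 / $1" | bc; }   # service time in ms -> IOPs
iops 2                               # prints 500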

Generally, for SSDs I like to see service times between 0 and 1ms (it can jump above 1ms every now and then, but if it does that often, look into speeding up your SSDs; perhaps you need to disable or enable disk caching)

For HDDs, service times between 1 and 10 ms are good. Between 10 and 15 is ok. Anything above that and your disks are pretty busy.

Ex: running iostat -x 3 on my NAS, we see HDD service times between 1 and 10 ms, so we are good and don't have any HDD bottlenecks.

If I saw these numbers consistently on an all SSD NAS then I would be worried.

  • svctm is how long a request takes to process outside of the OS.
  • whereas await is how long a request takes to process in total (within the OS and outside of the OS).

I am doing a big write operation on my NAS, so you see my w_awaits (write) are showing numbers but not my r_awaits (read). If I were reading, then my r_awaits would have values. If I were doing both, then I would have values in both r_await and w_await.

Notice that await will always be bigger than svctm. This is because of where the time measurements are taken: await also includes the time it took to process within the OS.

Note: this was just a look at await + svctm. Based on my other metrics, my queue size looks entirely too big, so if my NAS shut down right now, I would have many operations not yet written. This might result in a corrupted filesystem.

15Jun/20

Favorite Watch of 2019+2020

I am not a fan of bulky watches; I like them sleek and gorgeous. The winner goes to my friend's company:

https://durdenwatch.com/

The Durden watch is a sleek and sexy watch. I have had the privilege to own both types. Personally I like the black-background watch the most, but the white one was also very beautiful. In the end, both look amazing and last a long time. Mine lasted a year until I lost both of them; I like to imagine they are still running wherever they might be. The white one hides scratches better – you don't notice them. The only downside is that you will notice scratches on the glass surface more easily on the black watch.

Why is it called Durden? The name comes from the Fight Club movie. It is a reference to the main character “Tyler Durden”. My friend has always enjoyed that movie.

So if you are a Fight Club fan, then this watch is a must have.

16Mar/20

Coronavirus Dashboard – covid19.py

Here is my take on a coronavirus dashboard that uses daily updated json data from countries.

Source code (nothing fancy; it uses plotly for the charts): https://github.com/bhbmaster/covid19

This dashboard gets its json data from https://pomber.github.io/covid19/timeseries.json

For more info on this data go to the github link: https://pomber.github.io/covid19/

The charts are updated every 6 hours starting at midnight PST. However, the values at the data source are only updated daily, so don't expect new values until after midnight.

17Jan/20

Suggested Robocopy Switches

Note: This article is not mine. It was written at http://www.rainingforks.com/blog/2015/suggested-robocopy-switches-explained.html. I am simply excerpting the whole article; this way, if that site ever gets shut down, at least there will be a copy here. I am not taking any credit for the material in this post. This article was written by Steve Schuler.

Suggested Robocopy Switches Explained

Windows’ robocopy.exe is a great command line program to quickly copy or fully backup your files, but there’s a lot of confusion out there about how to use its (not very well-documented) switches.  Here’s just what you need to know:

First of all, you probably already know that typing “robocopy /?” will give you a long list of switches to choose from.  Start there if you’re confused.  But since there are a LOT of choices, and they’re not well-explained, here’s a run-down of what I typically use, as I go about my day as an IT guy:

The basic format is: robocopy <source path> <destination path> <switches>

NOTE: I've found that using robocopy to copy across a network doesn't always work using mapped drives!  Instead, use the full path (especially important when running as a Task in Windows Task Manager). For example, instead of "robocopy C:\Foo S:\Foo" do this: "robocopy C:\Foo \\SERVER\Foo"

Another tip is if you’re using file paths that contain spaces, then you need to enclose each path in quotes. If no spaces, then quotes are optional.

/FFT is necessary to copy between file systems, such as Windows' NTFS and Linux's EXT4. If you don't use this you can get weirdness like files looking like they're newer than they really are, etc., since the two file systems keep time differently.

/COPYALL copies ALL aspects of the file/directory, including ownership and permissions info. Required if you’re backing up a server or something that you want to maintain group/user permissions, etc. for. (NOTE: Don’t use this switch when copying files from Linux to Windows if you aren’t logged in as the same user with admin rights on both machines! If you do, you’ll get a lot of errors like “A required privilege is not held by the client” and “The revision level is unknown” as it creates a bunch of empty folders at your Windows destination, but skips copying all your files! Instead you can use the /COPY:DT mentioned below, and if you really need to backup your Linux ownership & permissions info, save all the files in a tarball and then just backup that single file containing the directories & files with their attributes intact to Windows.)

/COPY:DT to just copy files & date/time stamps. This is good if you’re just copying some files to give to a friend, and don’t need permissions, etc. copied. Also good for copying files from Linux to Windows (see “note” in /COPYALL above).

/FP outputs the full path so you can more easily see where it is while it’s running.

/MIR exactly mirrors the files & directories, so things at the destination will be deleted if they're not at the source. (This is the same as using /PURGE (which deletes stuff at the destination that doesn't match) with /E (which includes Empty subdirectories).)

/ZB tells robocopy to use restartable mode (which you want for large files, especially over WAN/unstable connections, since it’ll try to pick up where it left off if the connection gets dropped or there’s corruption mid-copy), and if access is denied, then it’ll use Backup mode, which allows you to copy files you might otherwise not have access to, assuming it’s being run under an account with sufficient privileges (e.g., member of Backup Operators, Administrators, etc.). (NOTE: the /Z switch sometimes slows down the copy speed, so if you don’t need it, don’t use it, especially if you feel like your Robocopy job is taking longer than it should. Sometimes there’s no speed difference, and sometimes it can be dramatic.)

/MT stands for Multi-Threaded, and tells robocopy to copy multiple files at once. The default number of threads is 8 (max is 128), but be careful, as running this over a network can really saturate your bandwidth, leaving none for anyone else. As a result, you may want to skip this one or try specifying fewer threads by doing something like /MT:2 which will just run two threads (instead of 1, which is what you get if you omit /MT entirely). (NOTE: This is only available in newer versions of Robocopy (Win7/2008R2 and later).  If you're running older versions (or just don't feel like bothering with this switch), you can simply open multiple command prompt windows and run it in multiple instances – I often will run two or three Robocopy batch files simultaneously. Also, this switch will make file copy progress numbers confusing, so it's best to use the /NP switch mentioned below to disable outputting the copy's progress. Some people speculate that running multiple threads can increase fragmentation, but I haven't seen any hard evidence of this, and with increasing adoption of solid state drives, it may not matter for much longer anyway.)

/R:1 /W:3 are two switches you probably want to use together to tell robocopy how many times to retry accessing a file (1 in this example), and how long to wait between retries (3 seconds in this example).  If you leave this out, it’ll retry 1 million times with a 30 second wait between each one when it encounters a file it can’t access!!!

/XD is what you use when you want to tell robocopy to skip (i.e., exclude) a directory. Just follow /XD with a space and then the path to what you want excluded. If there are multiple directories you’d like to skip, separate them with a space. For example: /XD “C:\Foo\private stuff” “C:\Foo\plans for world domination”

/LOG:C:\LogFileName.txt /TEE /NP are three switches you'll want to use together if you want to write the results of the copy to a log file (called "C:\LogFileName.txt" in this example). If you want it to write what's happening to the screen as well as to the log file, then you'll also want to include /TEE.  And, possibly most importantly, you want to include /NP in there so that it does NOT show the progress as each file copies. If you leave this out, then your log file will be filled with every single percentage complete it displays! So you'll have something like this: "0.0% 0.1% 0.2%" and so on, to 100% FOR EACH FILE, which is nuts.

Finally, if you want to append log file output to the end of an existing file, rather than creating a new file every time, you can modify the above line to include a plus sign, like so: /LOG+:C:\LogFileName.txt
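Putting the switches above together, a full command might look something like this (paths and log file name are examples):

robocopy C:\Foo \\SERVER\Foo /FFT /COPYALL /FP /MIR /ZB /MT:2 /R:1 /W:3 /XD "C:\Foo\private stuff" /LOG+:C:\LogFileName.txt /TEE /NP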

That’s pretty much all you need to know! I figured most of this out from trial & error, as well as some internet searches. Unfortunately, I’ve read so many incorrect, or confused posts about robocopy on the internet over the years, that I wanted to write this to set things straight.  Hopefully you find this helpful – if nothing else, at least I’ll now have a handy reference for the next time I need to put together a robocopy command… 😉

24Dec/19

How to RDP over an SSH tunnel

You can use this trick to access your home PC from a remote PC (like your work PC), instead of using TeamViewer or other similar software. You can set up your own encrypted and secure tunnel to work through. The requirement is basically to have an SSH-accessible server in the same network as the PC you want to access.

Server Side Requirements:

  1. Windows PC that you want to connect to (example local IP: 192.168.1.3)
  2. Linux server with SSH in the same network as the Windows PC (example local IP: 192.168.1.2 over port 22)
    1. This can be a virtual machine running off the Windows PC (just as long as it is accessible from the router; so make sure you use a Bridged Network Adapter)
  3. Internet access to the Linux server SSH (example WAN IP: 1.1.1.1)
  4. This can be achieved by setting up a port forward on your router to send traffic destined to port 22 (or any port) on your router to the Linux server's port 22.
    1. Example 1: route traffic hitting 1.1.1.1 on TCP port 22 to internal TCP port 22 on 192.168.1.2 (i.e. we port forwarded TCP port 22 from the router to port 22 on the linux server)
    2. Example 2: route traffic hitting 1.1.1.1 on TCP port 12345 to internal TCP port 22 on 192.168.1.2
  5. Enable RDP on your Windows PC:
    1. Control Panel -> System and Security -> System -> Change Settings -> Remote -> allow RDP connections and uncheck the box "Allow Connections only from computers running Remote Desktop with Network Level Authentication (recommended)" -> Select Users and add the Windows user[s] that will be connecting over RDP

Client requirements

  1. For Windows machines connecting to the RDP tunnel: make sure your Windows client has Cygwin installed with the ssh program (the Windows client is the one used to connect)

Verification

  • Verify the setup works by SSHing to your Linux server from a remote location.
  • Also, if you can, try to connect to RDP from another PC in your home network: Windows+R, then type "mstsc /v:192.168.1.3:3389"

How to connect from a Windows PC:

For the sake of the example I will use the example IPs listed above.

Open cygwin and run "./sshrdp_cygwin.sh 192.168.1.3:3389 root 1.1.1.1 22", then put in your SSH password. The RDP window then opens; put in your Windows login credentials. A sketch of such a script is below.
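The script itself isn't shown here, but a minimal sketch of what it does could look like this (the fixed local port 10000 is an assumption; the real script may pick a free port):

#!/bin/bash
# sshrdp_cygwin.sh (sketch) - usage: ./sshrdp_cygwin.sh <rdp-host:port> <ssh-user> <ssh-host> <ssh-port>
RDPTARGET=$1   # e.g. 192.168.1.3:3389
SSHUSER=$2     # e.g. root
SSHHOST=$3     # e.g. 1.1.1.1
SSHPORT=$4     # e.g. 22
LOCALPORT=10000
# forward a local port thru the ssh server to the RDP host; -f backgrounds ssh,
# and the short sleep keeps it alive just long enough for mstsc to connect
ssh -f -L ${LOCALPORT}:${RDPTARGET} -p ${SSHPORT} ${SSHUSER}@${SSHHOST} sleep 10
# launch the windows RDP client against the local end of the tunnel
mstsc /v:localhost:${LOCALPORT}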

You can then make an alias in your ~/.bashrc to always connect to your home pc (assuming you put sshrdp_cygwin.sh into your /usr/bin directory).

alias homepc='/usr/bin/sshrdp_cygwin.sh 192.168.1.3:3389 root 1.1.1.1 22'

From then on, you can just type homepc in your cygwin terminal and it will launch.

Connect from a Mac

Follow the same steps as Windows but use the script below instead. Also, since Macs don't have mstsc, you will need to install RDP software and open it manually each time a tunnel is set up. The sshrdp script will prompt for your SSH server details and give you instructions like:

"Open RDP to localhost:10000"

Then you will need to login with your Windows credentials.

Here is the Mac version of the same script:
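(Again a sketch rather than the original; Macs lack mstsc, so it just prints where to point your RDP client and holds the tunnel open.)

#!/bin/bash
# sshrdp_mac.sh (sketch) - usage: ./sshrdp_mac.sh <rdp-host:port> <ssh-user> <ssh-host> <ssh-port>
RDPTARGET=$1
SSHUSER=$2
SSHHOST=$3
SSHPORT=$4
LOCALPORT=10000
echo "Open RDP to localhost:${LOCALPORT}"
# hold the tunnel open in the foreground; ctrl-c when done
ssh -L ${LOCALPORT}:${RDPTARGET} -p ${SSHPORT} ${SSHUSER}@${SSHHOST}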

Similarly, you can set up an alias to use in your Mac terminal, except you will need to put it in your ~/.bash_profile instead of your ~/.bashrc (if I recall correctly, that is how it is done on Macs):

alias homepc='/usr/bin/sshrdp_mac.sh 192.168.1.3:3389 root 1.1.1.1 22'

Then you can access your home pc by simply typing homepc.

Connect from a Linux Server

You can probably just use the Mac steps, although I am not sure and have not tested it. The line of code with the netstat command might need a change/edit.