Perfect Server, Version 18

This is the latest iteration of my perfect server. I am building it in order to consolidate and deprecate previous server inventory. It also incorporates many new best practices that should further secure this new server.

The first step is to provision a new server; I use DigitalOcean. I will be logged in as root throughout, since everything here needs to be done as root. If you don't want to log in as root, you can instead prefix each command with sudo.

Now, add some sources to the package manager. Get there with;

nano /etc/apt/sources.list

Add these repositories;

deb http://ftp.debian.org/debian jessie-backports main

deb http://packages.dotdeb.org jessie all

Update the package manager and upgrade any packages that are available;

apt-get update && apt-get upgrade

Now install all the packages we will need;

apt-get -y install fail2ban apache2 php7.0 php-pear php7.0-mysql php7.0-mcrypt php7.0-mbstring libapache2-mod-php7.0 php7.0-curl screenfetch htop nload curl git unzip ntp mcrypt postfix mailutils php7.0-memcached mysql-server && apt-get install python-certbot-apache -t jessie-backports && a2enmod rewrite && service apache2 restart && mysql_secure_installation

You will be prompted to create a MySQL root password, which you will then immediately be asked for when mysql_secure_installation configures the MySQL server securely.

Name Thyself

Now navigate to the virtualhost directory;

cd /etc/apache2/sites-available

Remove the default ssl virtualhost. We will be creating a new one instead.

rm default-ssl.conf

Rename the default virtualhost to the fqdn of the server. Example: server3.website.com. Note that this is not the fqdn of the site(s) we are hosting on the server.

mv 000-default.conf [fqdn].conf

Edit the file and replace the admin email with your own. Change the DocumentRoot from /var/www/html to /var/www.
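
When you're done, those two lines inside the virtualhost tag should look something like this (the email address here is a placeholder);

ServerAdmin admin@example.com
DocumentRoot /var/www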

Now add the following block within the virtualhost tag of the file and save it.

<Directory "/var/www">
AuthType Basic
AuthName "Restricted Content"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
</Directory>

Lock it Down

Let’s create a credential set for our new virtualhost. This is sort of a catch-all for any domains we point here which are not yet set up.

htpasswd -c /etc/apache2/.htpasswd [username]

You will be prompted for a password. Basic auth is very bruteforceable, so my best practice is to use very high-entropy strings for both the username and the password, typically at least 64 bits of random base64 for each.
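
If you need a quick way to generate strings like that, openssl (installed by default on Debian) can do it; eight random bytes is 64 bits, which encodes to a dozen base64 characters. Run it once for the username and once for the password;

openssl rand -base64 8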

Apply Changes

We need to tell apache that we have changed the name of the default virtualhost file. First we disable the one we changed, and then enable our new one.

a2dissite 000-default

a2ensite [fqdn]

Now restart apache

service apache2 restart

Test our changes by navigating to the fqdn of the server. You should be prompted for a username and password.

Administrative Tools

We will need to put some tools in here so we can administer the server.

PHPMyAdmin

This will allow us to manage the databases we will be creating on the server. Head over to their website and get the download link for the current version.

Navigate to our new secure DocumentRoot directory and download that link.

cd /var/www && wget [link]

Now unzip it and remove the zip file we downloaded.

unzip [file] && rm [file]

Now that we have a PHPMyAdmin directory in our secure virtualhost, we need to configure it. Luckily it can do that itself! Use this command and enter the mysql root password when prompted.

mysql -uroot -p < /[unzipped phpmyadmin folder]/sql/create_tables.sql

The last thing PHPMyAdmin needs is a secret string. Edit the config file config.sample.inc.php and save it as config.inc.php.

Make sure to add a random string where prompted at the top of the file.
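
The line in question looks like this; fill the empty string with your own random value (the value shown is a placeholder);

$cfg['blowfish_secret'] = '[high-entropy random string]';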

Postfix Outbound-Mail Server

We need to edit the config file for postfix and change the interface to loopback-only, like so. A firewall rule blocking inbound connections to port 25 is a good first line of defense, but firewall rules can be changed by mistake, so this is a good second line of defense to prevent public access to sending mail through our server, while allowing us to still use it locally.
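
If you have not already created that firewall rule, an iptables rule like this one will do it (this is the same rule I use in the Azure guide later in this post; adjust the interface name if yours is not eth0);

iptables -A INPUT -i eth0 -j REJECT -p tcp --dport 25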

nano /etc/postfix/main.cf

Find this line;

inet_interfaces = all

And change to;

inet_interfaces = 127.0.0.1

Now edit the email aliases;

nano /etc/aliases

At the end of the file, make sure there is a line that starts with root and ends with your email, like so;

root: email@domain.com

Save the file and exit. Then run newaliases to let Postfix apply the changes. Restarting Postfix is not enough because we changed the interfaces line in the config file. We need to stop and start it like so;

newaliases && postfix stop && postfix start

Now our sites will be able to send emails!
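
You can test delivery with mailutils, which we installed earlier; this should land in the inbox you aliased root to;

echo "Test from the new server." | mail -s "Postfix test" root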

VPS Home

This is something simple I built which serves as a better index page for the secure virtual host and includes several helpful tools for diagnostic purposes. To try it out, run this command from the DocumentRoot directory.

wget https://raw.githubusercontent.com/cjtrowbridge/vps-home/master/index.php

PHPInfo

It’s helpful to be able to access details of the server’s php installation from this directory. I like to create a file called phpinfo.php which contains simply

<?php phpinfo();

Automatic Backups

Create a new file called /root/backup.sh and add the following to it. Make sure to replace the mysql password with yours.

#!/bin/bash

#deletes backups older than 24 hours
find /var/www/backups/ -mindepth 1 -mmin +$((60*24)) -delete

#creates new backups
tar -czf "/var/www/backups/webs-$( date +'%Y-%m-%d_%H-%M-%S' ).tar.gz" /var/www/webs
/usr/bin/mysqldump -uroot -p[mysql root password] --all-databases | gzip > "/var/www/backups/mysql-$( date +'%Y-%m-%d_%H-%M-%S' ).sql.gz"
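
Note that the script assumes the backup and webs directories already exist; create them once before the first run;

mkdir -p /var/www/backups /var/www/webs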

Now edit the crontab with nano /etc/crontab and add this line. This will automatically run that script every day at 8pm.

0 20 * * * root /root/backup.sh > /dev/null 2>&1

Make sure to give the script permission to execute.

chmod 775 /root/backup.sh

Offsite Backups

Now that we have regular backups going, we need to regularly get them off the server. I like BitTorrent Sync for this. It is very fast and seamless.

Run these commands to install BitTorrent Sync;

sh -c "$(curl -fsSL http://debian.yeasoft.net/add-btsync14-repository.sh)"

apt-get update && apt-get install btsync

I recommend selecting the user and group www-data for btsync as this will greatly simplify administration and file permissions.

Then navigate to the port you set up during the installation process and create credentials. As always, be sure to use very high-entropy credentials. Then create a shared folder for the backups, and copy the read-write key to your NAS to securely copy the backups every day.

The really great thing about this approach is that when your cron job deletes the backups on the server every day, BitTorrent Sync will archive them on your NAS so that they are always available. This means that at any point, you can simply take the automatically generated archives of your webs and MySQL databases and recreate your entire server in minutes.

Migrating Sites In

Move over the files for all the sites you want to host into individual directories in the /var/www/webs directory.

Now navigate to your virtualhosts directory.

cd /etc/apache2/sites-available

We created a default virtualhost file for the server and named it [fqdn].conf. This was the fqdn of the server, but not the sites it will host. Now we want to create our first hosted site. Copy the default file we made to create a new virtualhost like so…

cp [server fqdn].conf [site fqdn].conf

You can use any naming convention you like, but managing dozens or hundreds of these will become impossible if you are not naming them clearly.

Next, we need to add some new things to this hosted site's virtualhost file. Add a new line inside the virtualhost tag like this;

ServerName [site fqdn]

And change the line which has DocumentRoot to point to the directory for this hosted site. For example;

DocumentRoot /var/www/webs/[site fqdn]

Lastly add these two blocks at the end of the file.

<DirectoryMatch "^/.*/\.git/">
Order deny,allow
Deny from all
</DirectoryMatch>

<Directory /var/www/webs/[site fqdn]>
Options FollowSymLinks
AllowOverride All
Require all granted
</Directory>

The first block will prevent anyone from navigating into a git repository and accessing sensitive data like credentials, or from cloning the repository.

The second block will allow htaccess files or directory rewrites, and prevent directory listing. These are required changes if you want to host WordPress sites, and best practices all around.
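
Putting it all together, and leaving out the comments Debian ships in the file, a finished virtualhost for a hypothetical site example.com would have roughly this shape (whether you keep the basic-auth block from the default file is up to you);

<VirtualHost *:80>
ServerName example.com
ServerAdmin admin@example.com
DocumentRoot /var/www/webs/example.com
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

<DirectoryMatch "^/.*/\.git/">
Order deny,allow
Deny from all
</DirectoryMatch>

<Directory /var/www/webs/example.com>
Options FollowSymLinks
AllowOverride All
Require all granted
</Directory>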

Now we just need to enable these changes and make the site live with;

a2ensite [site fqdn] && service apache2 restart

From this point on, this new virtualhost can be copied to create new sites, rather than recreating each one from the original virtualhost file.

Free SSL

We already set up LetsEncrypt, so now we just need to run it. Once the domains are set up and pointed at the server's IP, and a virtualhost has been configured, all it takes is running certbot, which takes care of everything.

certbot
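
If you would rather be explicit about which plugin and domain to use, certbot accepts flags for both;

certbot --apache -d [site fqdn]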

How to Build a Free Linux Microsoft SQL Server

This covers how to create a virtual Linux server running Microsoft SQL Server.

First, create a virtual server with the following requirements in mind.

  • Ubuntu 16.04 LTS (Server or Desktop)
  • At least two CPUs
  • At least 4 GB RAM
  • At least 10 GB HDD for the operating system
  • PLUS at least double the amount of space your databases will use

When you install Ubuntu, make sure to enable updates and third-party software, unless you’re the real DIY-type.

Then, update the default installation packages;

sudo apt-get update && sudo apt-get upgrade

Now we need to install a few tools before we can get started;

sudo apt-get install cifs-utils curl

Install Microsoft SQL Server

Run these commands to install Microsoft SQL Server and its tools;

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list | sudo tee /etc/apt/sources.list.d/mssql-server.list

sudo apt-get update && sudo apt-get install -y mssql-server

sudo /opt/mssql/bin/mssql-conf setup

You will be prompted to create an SA (system administrator) password. Use something with high entropy!

Now install tools;

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | sudo tee /etc/apt/sources.list.d/msprod.list

sudo apt-get update && sudo apt-get install mssql-tools unixodbc-dev

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc

source ~/.bashrc

Now, copy over your backup file and put it in /var/opt/mssql/data/

I have not gotten the CLI tools to work for importing the backups; they seem to look for Windows paths. You will need to use SQL Server Management Studio to import the backup.
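
For reference, the T-SQL route would look something like the two commands below; the database name and logical file names here are hypothetical, so list yours first with RESTORE FILELISTONLY and remap them with MOVE. This is the part I could not get working reliably, so treat it as a sketch;

sqlcmd -S localhost -U SA -Q "RESTORE FILELISTONLY FROM DISK = '/var/opt/mssql/data/MyDb.bak'"

sqlcmd -S localhost -U SA -Q "RESTORE DATABASE MyDb FROM DISK = '/var/opt/mssql/data/MyDb.bak' WITH MOVE 'MyDb' TO '/var/opt/mssql/data/MyDb.mdf', MOVE 'MyDb_log' TO '/var/opt/mssql/data/MyDb_log.ldf'"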

How To Create a Local Storage Repository on XenServer

I was working with a XenServer in a complex corporate network environment, and it was not possible for this server to access any samba shares, such as the one on my laptop. I needed to put some ISOs on it, so I decided to create a local storage repository. This way, I would be able to simply wget an ISO from the web and then use it locally.

First, SSH into the XenServer and create a directory for the repository;

mkdir -p /var/opt/xen/LocalRepo

Then, tell Xen to create a XenServer Storage Repository at that directory;

xe sr-create name-label=LocalRepo type=iso device-config:location=/var/opt/xen/LocalRepo device-config:legacy_mode=true content-type=iso
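
You can confirm the repository was created with;

xe sr-list name-label=LocalRepo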

Now move to the directory and then wget whatever ISOs you need…

cd /var/opt/xen/LocalRepo

wget http://releases.ubuntu.com/16.04.2/ubuntu-16.04.2-desktop-amd64.iso

Now you’re cooking with gas!

PS. Make sure to check your free space and make sure your ISOs will fit. This partition is not very big by default.

df -H

How to Start a Business For Free (With Examples)

I gave this speech for a public speaking class. It included a self-evaluation assignment which I share here;

CJ Trowbridge

2017-07-05

Sierra College Comms 1

Scott Kirchner

Demonstration Speech Self-Evaluation Assignment

In my speech, I demonstrated how to bootstrap a business. I gave examples from my experience bootstrapping a pizza business in Chico several years ago. I felt like the speech went very well. I shared the video online and received good feedback, and my peers seemed to feel that it went well based on their reactions during the speech and afterwards. (I used a special camera to record the speech which captures the audience as well as the speaker, so I was able to review their reactions.)

When I was composing the outline for the speech and rehearsing it, I tried to make it as relatable as possible. I made sure to include at least a few concrete examples whenever I discussed abstract ideas. I find this generally lacking in most entrepreneurial literature, so I think and hope that I improved on this frustrating trend. I feel like most people can relate to this topic if it is presented properly. For these reasons, I think the content was good.

My last-minute addition of a visual aid was also a really great touch. It was more than just the visual effect, or even the smell; it was visceral. I think it really grabbed attention, and it made the value-proposition of the content become a visceral feeling for the audience. Hunger is a limbic response, a deep emotional thing. It supersedes the prefrontal cortex and the trained analytic mind. This was a major underlying theme in my speech; take the product to the people who don’t know they want it, and make them want it. I demonstrated that without even talking about it. My clincher about how the audience could take the pizza into the quad right now and quadruple the money seemed to leave them with ideas about how they could implement the ideas I had discussed. Several audience members approached me about business ideas they had and how they might bootstrap them like I did. I think this part of the speech was very effective.

In general, I would say I was not very anxious about this speech. I have had a great deal of public speaking experience from a young age, BUT a big part of what little anxiety I did have was timing. I am not used to timed speeches. To alleviate this anxiety, I decided to include several quick stories in my concrete examples for each abstraction. Then, I could expand on the stories as required to get to the correct time. I think concrete examples were a good idea, but I think the stories went too long, and this was the one development opportunity identified by the professor, who said I “Squirrelled,” (or went on tangents or rabbit trails) in his remarks at the end of the speech. This had been a deliberate and strategic effort to fill time, but obviously it distracted from the content. I will try to expand on concrete details next time, or perhaps use a story as one of the major points, rather than trying to incorporate several into sub-points. Also, I should have defined the “unfamiliar” word bootstrap as soon as I first used it.

This implies a different structure would be better. Rather than enumerating abstractions and then providing concrete examples and stories, a better strategy might be to enumerate several abstractions and provide concrete examples only, then finish up with a brief story to tie everything together. This also means timing would be harder, and I will need a better strategy for making sure the time is correct. I think doing some sort of outline for the ending-story and then selectively condensing it would be a better strategy for getting the time correct.

How To Migrate Production MySQL Database Servers

Migrating database servers in production, whether they are large or small, is a surprisingly simple process but a delicate one. There are fewer methods available as the size goes up. This is something I have been asked about by several colleagues so I decided to document my processes and best practices.

The File Method (BEST)

This assumes you can afford a few minutes of downtime, and that you are using Debian/Ubuntu or something like it, and MySQL on both servers.

Take your database application offline, and then on the origin server run this command;

/usr/bin/mysqldump -u[Username] -p[Password] [Database] > backup.sql

This will take a while and give you the SQL file you need. I just moved the file to a directory on the origin server where I could wget it from the destination server, but you could also email it to yourself or use a flash drive or network share.
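
For larger databases, it is worth compressing the dump as it is written and streaming it back in on the destination; something like this;

/usr/bin/mysqldump -u[Username] -p[Password] [Database] | gzip > backup.sql.gz

gunzip < backup.sql.gz | mysql -u[Username] -p [Database]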

Now on the destination server, create a new blank database with the same name and run this command in the same directory as that file;

mysql -u[Username] -p [Database] < backup.sql

You will be prompted for the new server’s password and then it will put all the content into the new database.

This is not a complex process, but it is a powerful one because it works no matter how large and complex the database is. I recently used this for a database containing millions of rows in dozens of tables.

PHPMyAdmin Method (Easiest)

If your database is just a few megabytes or less, you can use PHPMyAdmin to transfer everything to a new server.

On the origin server, navigate to the database you want to migrate, and click the “Export” tab at the top.

Select “Custom” for the export option, and then “View Output as Text” and submit.

This will take a while and give you a text box full of SQL code. This is the same code that would be in the file we generate in the “File Method” mentioned above. If the database is too large, this step could freeze or crash the browser, but for smaller databases, this should work fine.

Now copy that text and navigate to the destination server’s PHPMyAdmin installation. Create a database with the same name, and navigate to the SQL tab within that database. Paste the code there and click “Go.”

You have successfully migrated your database!

The Replication Method (Hardest)

Digital Ocean has a great tutorial on this alternative option, but it is a lot more technically complex. I would not advise this unless 100% uptime is critical. Using the file method will only give you a few minutes of downtime, and if you can’t afford that you probably shouldn’t need this tutorial ;P

If 100% uptime is critical, use this tutorial to set up slave replication to the new server, then switch the application load over to that server, and disable slave replication.

You have now migrated production servers with zero downtime! (But really, you should use the File method. It is much less complex and far easier to implement.)

Startup 2: RSI Alert

This is part of a series on Building 12 Startups in 12 Months.

This is product number two: RSIAlert.com!

What Inspired This Project?

I follow a few dozen stocks and do some day trading in my spare time. Working on a previous project Securities.Science, I did some research into strategies using RSI to decide when to buy and sell stocks. I got some feedback from the first users of that project about how they would like to be able to receive email alerts at certain indicator points.

For example, one simulation on Securities.Science explored trading based on the RSI-14, or the RSI for the previous 14 trading days. Whenever the RSI-14 of a stock is below 30, the simulation buys, and then sells at close on the same day. Run against the previous year’s data, this simulation indicated a 136% return. This number could easily be improved upon by selling at a better point than close, but that’s another story.

Initially, I explored trying to add email alerts to queries inside Securities.Science, but it really isn’t set up to work that way. Users construct arbitrary datasets which would be difficult to integrate into a mail trigger system, and there is obviously potential risk of abuse with automated outbound emails. I decided to build a new product which focuses on only this one type of automated email.

This product is very simple compared to the other ones I am considering for this challenge. It just shows a list of a few high-return securities and their RSI-14 as of the previous close. Users can sign up to receive email alerts each day letting them know when the RSI-14 of any of the securities is below 30.

What Exactly is RSI?

From Investopedia:

“The relative strength index (RSI) is a momentum indicator developed by noted technical analyst Welles Wilder, that compares the magnitude of recent gains and losses over a specified time period to measure speed and change of price movements of a security. It is primarily used to attempt to identify overbought or oversold conditions in the trading of an asset…

The RSI provides a relative evaluation of the strength of a security’s recent price performance, thus making it a momentum indicator. RSI values range from 0 to 100. The default time frame for comparing up periods to down periods is 14, as in 14 trading days…

Traditional interpretation and usage of the RSI is that RSI values of 70 or above indicate that a security is becoming overbought or overvalued, and therefore may be primed for a trend reversal or corrective pullback in price. On the other side of RSI values, an RSI reading of 30 or below is commonly interpreted as indicating an oversold or undervalued condition that may signal a trend change or corrective price reversal to the upside.”
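
In short, the calculation works out to;

RSI = 100 - ( 100 / ( 1 + RS ) )

where RS is the average gain on up days divided by the average loss on down days over the 14-day window.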

Straightforward Monetization

Monetization will be very straightforward; ads on the site and maybe in emails. This project also has the potential to expand into other verticals for various other things people may want automated emails about.

Lack of Similar Products Means Competitive Advantage

As far as I am aware, there is no other product which does what this does, or I would be using that myself rather than building this.

A few companies have put together similar models for other topics, like Medium which offers a daily email containing some stories they have picked for you to read.

I was also partially inspired by IFTTT which allows you to set up automated emails for lots of different things, but it all requires domain expertise and setting it up involves some degree of technical complexity. This product and any future expansion is designed to be idiot-proof, with a broad market in mind.

Future Features

One obvious next step would be to integrate this with my upcoming current-events project; automatically including relevant news content alongside the RSI-14 data would give users valuable context to analyze.

Setting Up PHP Apache2 MySQL MSSQL Server on Azure

First, update the packages;

sudo apt-get update && sudo apt-get upgrade

Set up initial applications;

sudo apt-get -y install unzip fail2ban apache2 mysql-server php5 php5-curl php-pear php5-mysql php5-mcrypt screenfetch htop nload curl git ntp freetds-common freetds-bin unixodbc php5-sybase && sudo php5enmod mcrypt && sudo a2enmod rewrite && sudo service apache2 restart && sudo mysql_secure_installation

Set Up SMTP Email with Postfix

Create a firewall rule to block SMTP access (this is redundant, because we will also configure Postfix for loopback access only);

sudo iptables -A INPUT -i eth0 -j REJECT -p tcp --dport 25

Install Postfix;

sudo apt-get -y install postfix && sudo apt-get -y install mailutils

Edit postfix config file;

sudo nano /etc/postfix/main.cf

Change "inet_interfaces = all" to "inet_interfaces = 127.0.0.1", allowing only loopback requests. This is in addition to the firewall rule, which prevents outside access.

Edit aliases list;

sudo nano /etc/aliases

Append this to the end and save;

root: email@domain.com

Run this to apply the changes;

sudo newaliases && sudo postfix stop && sudo postfix start

Edit the default virtualhost

sudo nano /etc/apache2/sites-available/000-default.conf

Set the ServerName to the fqdn. Save and restart apache2;

sudo service apache2 restart

Edit aptitude’s sources list;

sudo nano /etc/apt/sources.list

Set Up LetsEncrypt for SSL/HTTPS

Install LetsEncrypt Certbot;

sudo apt-get install python-certbot-apache -t jessie-backports

(This may require extra steps. The Debian default aptitude sources list does not contain backports, but the default Azure list does.)

Run Certbot to install HTTPS;

sudo certbot

Create a credential set with some high-entropy username and password combination. I like to use a 32-bit random key for both. These credentials will only ever be transmitted over the TLS connection LetsEncrypt just gave us, so they are safe in transit;

sudo htpasswd -c /etc/apache2/.htpasswd [Username]

Edit the ssl-virtualhost and add this within the virtualhost tag;

<Directory "/var/www/">
AuthType Basic
AuthName "Restricted Content"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
</Directory>

Add this to the end of the file, outside the virtualhost tag, in order to enable htaccess if you're going to need that;

<Directory /var/www/>
Options Indexes FollowSymLinks
AllowOverride All
Require all granted
</Directory>

Restart Apache;

sudo service apache2 restart

Set Up PHPMyAdmin

PHPMyAdmin can also be installed via aptitude, but that exposes it publicly, and there is some potential for exploits down the road. This way, no one can access it until they get past the virtualhost password we set up earlier.

Head over to PHPMyAdmin's site, get the link for the current version, and download it into the /var/www directory;

sudo wget https://files.phpmyadmin.net/phpMyAdmin/[Version]/phpMyAdmin-[Version]-all-languages.zip

Unzip it;

sudo unzip phpMyAdmin-[Version]-all-languages.zip

PHPMyAdmin will prompt you for a blowfish secret. Navigate to its directory and copy the sample config file;

sudo cp config.sample.inc.php config.inc.php

Open the new file and look for this line…

$cfg['blowfish_secret'] = '';

I wrote a tool which comes up with a perfectly sized high-entropy string to put here. Check it out.

Once that is entered, navigate to the sql/ directory within the PHPMyAdmin folder and run this to set the tables up. It will prompt you for the password you set up earlier;

sudo mysql -u root -p < create_tables.sql

I Made This Simple Stats Tool

VPS-Home is a simple tool I made some time ago which shows a few important things. It shows the free space on the disk, the disk utilization for each directory within /var/www and the top running processes at the moment, along with the runtime and motd.

Install it by simply downloading it into the virtualhost we made;

sudo wget https://raw.githubusercontent.com/cjtrowbridge/vps-home/master/index.php

Connect to MSSQL

We installed FreeTDS, which allows for Tabular Data Stream (TDS) connections to Microsoft SQL Server from PHP5.

Test it with this command;

tsql -H [host/ip] -p [port] -U [username] -P [password] -D [database]

You should see something like this;

locale is "en_US.UTF-8"
locale charset is "UTF-8"
using default charset "UTF-8"
Default database being set to [database]
1>

If you see something about “Unable to connect: Adaptive Server is unavailable or does not exist” that is ok too. Edit /etc/freetds/freetds.conf and add this to the end;

[nickname]
host = [host/ip]
port = [port]
tds version = 7.0

Your version may vary. For MS SQL Server 2008, this was the version I used.

Now you can use mssql_query() in PHP5 to build server applications with Microsoft SQL Server!
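
As a minimal sketch, a connection test might look like this (the nickname, credentials, database, and table are all placeholders);

<?php
// connect using the nickname defined in /etc/freetds/freetds.conf
$link = mssql_connect('[nickname]', '[username]', '[password]');
if (!$link) die('Unable to connect.');
mssql_select_db('[database]', $link);
// run a simple query and print the rows
$result = mssql_query('SELECT TOP 5 * FROM some_table', $link);
while ($row = mssql_fetch_assoc($result)) {
print_r($row);
}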

Startup 1: Securities Science

This is part of a series on Building 12 Startups in 12 Months.

This is number one: Securities.Science!

What Inspired This Project?

My first startup in the series is Securities.Science. It lets users run queries against historic stock trading data in order to test theories and strategies. All data is public and everyone can see the work that others are doing.

This started with my coworker Luke Leggio and I trying to collaborate on developing strategies for trading leveraged commodity ETFs on RobinHood. I was very frustrated with the few tools and communities that exist for this purpose.

I had tried Openfolio which has since pivoted to a totally different kind of product. At the time, they let you share your trading activity and results with others and compare to how their strategies worked out for them. The problem was that it was terribly buggy and often reported things incorrectly. I wrote to their support people several times, even offering to do the work of fixing their products for them because the problems were so obvious. (Numbers being negative instead of positive when pulled from certain APIs, etc.) Some features like search and viewing the top performers didn’t work at all. They had no interest in making their product work, so I decided to make my own as an alternative.

Securities.Science automatically pulls data from various public APIs and allows users to write SQL queries that implement securities trading strategies. The queries pair with simple visualization tools in order to show how each strategy performs over time.

First Steps

The site is now live, and the source code is all available on Github. Anyone can sign up for free and start running queries against historic datasets.

I have included lots of different tickers, including all of the leveraged commodity ETFs which I follow, along with all the top stocks millennials like according to Business Insider. Adding more is trivially easy, but I didn't want to just add thousands of tickers, both because of the maintenance overhead and because most of them are not particularly interesting.

I wrote this as a plugin for Astria, a simple web application framework I have been developing for almost a decade. The code is very simple and hopefully distilled to the minimum necessary to explain the content. Check it out!

Next Steps

There are a few next steps that jump out at me if this finds adoption.

Expanded Datasets

The page describing available data encourages the user to reach out to me if they want to see any additional data sources. Eventually, users should be able to add data sources for whatever they want with simple tools.

Content Development

Scraping and collating data is one thing, but presenting it in a format which brings in organic traffic is a separate art. Other news and data sources relating to each stock could be integrated so that users can focus on particular industries, commodities, or ETFs and get more information than just trading data.

Execution Integration

There are lots of great APIs which would allow integration with stock brokerages so that users can set up triggers for buying and selling based on their models in the app. It would be fun to add that later.

Machine Learning and Other Advanced Analytics

The first version of the product only features SQL queries for strategy development. This enables lots of interesting and basic strategies to be implemented and tested, but adding machine learning and other advanced analytics features would be another order of magnitude in capability for users.

Getting Started With Golang

I went to a Software Engineering Daily meetup a couple days ago and spoke with several CEOs whose companies are focused on data science and machine learning. I asked about what languages they are looking for in new hires. These conversations cemented my desire to learn Golang in conjunction with TensorFlow as my next major engineering paradigm.

“Hello, World!”

I started with a new Debian droplet at DigitalOcean (referral link) and followed this tutorial from DigitalOcean to set up the server with the latest version of Golang and print out my first “Hello, World.”

Next Steps

Golang’s site has a tutorial called Writing Web Applications which seems fairly comprehensive. I am going to work through this tutorial and then get started on the TensorFlow tutorials.

Implementing the Webroot SecureAnywhere Business Endpoint Protection API in PHP

At Tech 2U, we sell Webroot SecureAnywhere Business Endpoint Protection as our antivirus product. This is typically used for managed enterprise endpoints.

Webroot SecureAnywhere Business Endpoint Protection is managed through a web console which must be accessed each time a new endpoint is created; dozens of times every day for us. This web console is intended to be used for companies managing dozens or maybe a hundred endpoints. We use it to manage many thousands of endpoints. This quickly led to the web console being so slow and unresponsive that it became unusable, taking minutes to load each page. I decided to implement their API in order to avoid using the web console and automate the process of creating keys.

I had already built a comprehensive custom PHP/MySQL CRM to manage all operations at Tech 2U, so this new API integration would need to simply create keys whenever they are sold and show them to the person selling them.

I came up with this: PHP-WebrootAPI.

It gives you one main function: MakeWebrootKey(). It is pretty straightforward and allows you to create keys by passing in the customer's information. The keys are then stored in a local table and are accessible from the customer's profile page or by searching.

Getting this API implemented was very tricky because their documentation is terrible and they don't respond to tickets. I ended up combing through their web console's HTML to find many of the missing pieces. This is the only way I could find to get the GSM key IDs, policy IDs, and some of the other credentials. Once I had all of those, though, implementing the rest of the API fell into place.

Check it out and email me if you need help or have any questions!

There is a tremendous amount of customer information in this API and I can’t wait to integrate it into my marketing automation platform! 😀