Quit Your Mediocre Job And Get An MBA

Going to college is an investment, but many people assume that any college is automatically a good investment. I know a lot of people who have earned liberal arts bachelor’s degrees and ended up stuck working in restaurants for years, unable to find a real job. That is a waste of an education: all that money and time spent, and they are no better off than someone who never went to college at all. And they probably have debt to pay back.

I waited over a decade to go to school, until I knew what I wanted to do and had a coherent plan to actually get a return on this huge investment.

If you find yourself working a mediocre job that you hate, then maybe it’s time to do more. So what would happen if you just quit your job today, took out student loans, and went to school for an MBA?

According to research done by US News in 2016, 88% of students who get an MBA find a job within three months making an average of $126,919. That’s the average. Consider the average person for a moment and ask yourself whether you’ll be ahead of that curve. According to Bloomberg, the average person triples their previous salary when getting an MBA from ~$50k to ~$145k.

Cold Turkey

Imagine quitting your job today and starting the path to your MBA tomorrow.

To save money and improve your chances of getting into a good school, you decide to start by finishing the IGETC and Assist.org at a community college. If you’re not working at all, then tuition is free, and you will get about $3k/semester in financial aid. Let’s assume you take out about $20k in student loans along with that financial aid to cover living expenses while getting through Assist.org and IGETC. This number is deliberately high; my own amount was much lower. And I went full-time at two different community colleges to speed up the process.

Now it’s time for a four-year school, BUT since we did IGETC and Assist, we’re already halfway done. Let’s assume we decide on a mid-range state school like San Diego State University and a bachelor’s that actually has job potential like engineering, marketing, or computer science rather than something pointless like psychology or art. There are cheaper options and more expensive options out there, but the important thing is to get a degree that is actually going to mean something to an employer, otherwise what’s the point?

According to CollegeData.com, the average cost (which includes tuition, room and board, supplies, and other expenses) for in-state students at San Diego State University is $28,224, minus an average financial aid award of $11,400. So that’s $16,824 per year. Since we did the IGETC and Assist at community college, we’re only spending two years here, which comes out to a total of $33,648 that goes onto student loans.

Alright, so now our total loan principal is $53,648 ($20,000 from the community college years plus $33,648 from SDSU) and we have a valuable bachelor’s degree. Time for that MBA.

Bloomberg has really comprehensive research on this, and they put the average cost of an MBA in the US at just $53k plus living expenses. There is far less financial aid for master’s students, so let’s add another $20k in debt to cover living expenses while we are doing the master’s program.

Now we have our MBA and debt of around $126,648.

Less Debt Than Income

Remember from above that within three months, MBA grads are making an average of $126,919/year. We could pay all of this debt off in the first year if we stay as frugal as we were in college, or more likely we will spread it out over the next few years and enjoy some of the fruits of our labor. The point is that this amount of debt is trivial for an MBA grad: on average, the total debt is LESS than the starting annual salary.

Having student loans and paying them off is a great way to demonstrate you are creditworthy. Once you get past the educational hurdle and triple your income, you will be able to do things you never could have before.

Just Do It

It’s scary to leave the comfortable routine and reinvest in one’s future, but it makes sense for anyone smart and capable to make a choice like this, especially if they are tired of wasting time doing mediocre work for mediocre rewards. Life is too short!

Years of Problems Solved


#deletes old backups
find /var/www/backups/ -mindepth 1 -mmin +$((60*24)) -delete

#creates new backups
tar -czf "/var/www/backups/webs-$( date +'%Y-%m-%d_%H-%M-%S' ).gz" /var/www/webs
/usr/bin/mysqldump -uroot -p[mysql root password] --all-databases | gzip > "/var/www/backups/mysql-$( date +'%Y-%m-%d_%H-%M-%S' ).sql.gz"

This is a script I wrote years ago that shall live in infamy. It creates an automatic backup of databases and virtualhost directories. It is called by a cron job each day, and builds the new archive files, depositing them into the backups folder.

The backups folder is a Bittorrent Sync repository which automatically copies the backups to other NAS servers. This script also deletes the old backups each day as you can see at the top. Because the files are deleted on this server, the remote repositories they are syncing with retain old versions. This means that all prior backups are saved on the remote NAS server, but only the most recent backups are ever stored locally.

Because the files are transferred with the BitTorrent protocol, they are end-to-end encrypted and highly available across an unlimited number of nodes. So the remote NAS servers will share the backups with each other if necessary.

This system provides highly available, completely free, and secure offsite backups. The remote server has a highly secure virtualhost which shares the backups, so they are available to other command-line scripts which can fetch them and deploy new versions of these servers in seconds.

Also keep in mind this is a simplified version of the script. The actual script I use will create a separate backup file for each database and for each virtualhost. This script will create one single archive of all databases and one single archive of all virtualhosts. This is still a good system, but it is less easy to deploy one single virtualhost or database this way if a server is hosting more than one.
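Just to illustrate the idea, a per-virtualhost and per-database version might look something like this rough sketch (this is not the exact script I use):

#creates one archive per virtualhost directory
for dir in /var/www/webs/*/; do
  name=$(basename "$dir")
  tar -czf "/var/www/backups/webs-$name-$( date +'%Y-%m-%d_%H-%M-%S' ).tar.gz" "$dir"
done

#creates one dump per database, skipping MySQL's internal schemas
for db in $(mysql -uroot -p[mysql root password] -N -e 'SHOW DATABASES' | grep -Ev '^(information_schema|performance_schema|mysql|sys)$'); do
  mysqldump -uroot -p[mysql root password] "$db" | gzip > "/var/www/backups/mysql-$db-$( date +'%Y-%m-%d_%H-%M-%S' ).sql.gz"
done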

The Symptom

Every once in a while, after some unknown period of time, Bittorrent Sync stops working. It reports an unknown error and has to be reconfigured. Then it works fine for a while, but it ALWAYS happens again.

It only happened on some servers, despite the same script running on all of them. (I now realize the reason.)

I tried for a long time to figure it out, but I chalked it up to a bug in BTSync because it is a mildly janky, gratis, closed-source, and long-discontinued product. I just kept periodically reconfiguring BTSync and everything kept working, despite this little annoyance.

The Cause

Bittorrent Sync stores the configuration files for each repository in a hidden directory within that repository called “.sync”, much like git’s hidden .git directory.

When my script deletes old files in the backups directory, it also deletes the Bittorrent Sync configuration files inside .sync, and then BTSync crashes until it is reconfigured.

The Fix

This is the new script which solves this problem;


#deletes old backups
find /var/www/backups/www/ -mindepth 1 -mmin +$((60*24)) -delete
find /var/www/backups/mysql/ -mindepth 1 -mmin +$((60*24)) -delete

#creates new backups
tar -czf "/var/www/backups/www/webs-$( date +'%Y-%m-%d_%H-%M-%S' ).gz" /var/www/webs
/usr/bin/mysqldump -uroot -p[mysql root password] --all-databases | gzip > "/var/www/backups/mysql/mysql-$( date +'%Y-%m-%d_%H-%M-%S' ).sql.gz"

As you can see, each type of backup now goes into its own subdirectory, and backups are only deleted from those subdirectories. The config files are no longer affected when backups are deleted.
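Another option, if you would rather keep a single backups directory, is to simply exclude the hidden .sync directory from the cleanup. Something like this should also work:

#deletes old backups but leaves Bittorrent Sync's hidden .sync directory alone
find /var/www/backups/ -mindepth 1 -not -path '*/.sync*' -mmin +$((60*24)) -delete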



Startup 9: What are you wearing today?

Like many people, I am a student. I attend over a dozen classes at two different colleges, and I also attend regular social events and networking meetups. As I am getting dressed, I often wonder: is this the same thing I wore the last time I went to the place I’m heading now?

It sounds silly, but it’s something I always ask myself because I want to make the right impression. This is further complicated by the fact that I am something of a minimalist, so I keep fewer than ten shirts at any given time and only a few pairs of pants and shorts.

I was talking one day to my mom about this anxiety I feel, and she emphatically agreed.

I decided to make a simple app to keep track of what I wear each day so that I can look back and see what I wore last time I was at a particular class or event and know not to wear the same thing again. It’s schmuck insurance.

Wearing.Today is the result!


The minimum viable product version of this app is not social; each user’s profile is private. Users can post pictures from a simple single-page app which also lets them edit each post’s blurb, or delete their posts.

Paradigm Shift

This app is written completely in the functional paradigm. I wrote it as a single-page HTML/JS application built to be hosted on S3, with Lambda handling the backend functions.

I did not actually deploy it there, because I don’t have accounts with those services and they don’t support my primary language, PHP. The point was not to actually do those things, but to rapidly write a simple, complete app in that paradigm and get a sense of the workflow and how to construct the API.

Startup 8: Stardate.Today

For years, I have calendared prolifically. (Is that a word?) I track all the granular details: my to-do list is on one calendar; all my classes and homework; my social life; gym, yoga and exercise; and previously my jobs at Tech 2u and Starbucks. Each of these topics is on its own color-coded calendar within Google Calendar. You can take a look at what my weeks look like at cjtrowbridge.com/calendar.

There is a problem though. Sometimes I wonder what I accomplished on a particular day off or with my free time after work. I go back in my calendar to check, and there is nothing there; just a blank spot where I neglected to note the day’s events. It’s hard to quantify how well I use free time or time at work when I don’t have any record of what I accomplished. What if I could make like Janeway and just shout at the wall like it’s my journal?

Journaling always sounded interesting, but there’s no way I am going to lug around paper and a pen to write in it, and even if I did, it wouldn’t be searchable.

Enter Stardate.Today. This simple tool lets me type out my stream of consciousness into private posts, and then adds them to a timeline for me to search back through. They are also added to my calendar so I can see when each thing happened.

It takes just seconds to get started. Simply log in with Google and you will be given a link which you can enter into your phone or any calendar tool. On the homepage, you are presented with a simple box to type in and a list of your past posts. When you enter a post, it is automatically added to your calendar.

Simple as that.

[Working Draft] Smarter Sockets

This is something I looked long and hard for before deciding to build it myself. Also, a school project pulled this off the shelf and added some urgency. Last year I completed a very early proof-of-concept prototype of just the web-controlled relay board. It didn’t have the parallel shift registers or the actual sockets. I never finished it or came close to building an actual final product.

[Take a better picture]

That prototype was built around an Arduino-controlled relay board, which in turn drove a series of electrical sockets controlled by a web page over an Ethernet connection. [Feature link to commits where I shared this with the Arduino community.] The new design also lets you monitor consumption and prevents outages with a battery backup.

This project combines all of these pieces: per-socket power metering, remotely controlled sockets, and a battery backup.

There is not currently any product that does this which I have been able to find. And as a minimalist who is committed to conserving energy, being independent, and being aware of my footprint, I was really hoping to find something like this out there.

Improving on Kill-A-Watt

Kill-a-watt is a very interesting product which nevertheless has some huge development opportunities.

It lets you see power usage for a single electric socket. There are several problems with this.

For one, it only works for one socket.

Two, it doesn’t let you see the data except through the limited interface. There are no graphs of usage over time and there is no web accessibility for the data.

Expanding on this idea, each socket on my new device will allow you to see power consumption over time. There will be a clean web-based interface which lets you see any unusual spikes and be responsible with your energy consumption. This will pair well with the smart-socket feature which will enable you to turn things off when you are not using them or when they are using too much energy.

Improving on Smart-Sockets

Smart sockets are very limited. They typically offer only one socket, and do not offer usage metrics. Also, they feature very poorly implemented security and control software.

[Explore Steve Gibson’s IoT security reviews]

Improving on these widespread industry problems will be an important and valuable step.

At the hardware level, offering multiple smart-sockets is already a huge improvement, as is offering usage metrics, but there is room to improve further. Another major feature of this project is granularity. I want to make sure to give enough detail so that developers can create multiple physical formats. Maybe you only want one socket, not eight. Why not?

There is no reason this system cannot fit into a wall socket and replace the old-fashioned ones you already have. Imagine removing the mess of plugging devices into devices into the wall, and just putting the smart-socket inside the wall.

[include diagram]

Including a UPS (Might take this out)

I started with this very thorough tutorial which does a great job of explaining the terms and options that differentiate existing UPS products.

I found a discarded 2kW UPS with a dead logic board at a computer repair shop, which I was able to get for free. The sealed lead-acid batteries all worked fine; it was just a bad control board. :] This was exceptionally lucky, but you may be able to find something similar if you look.

I had also explored scavenging 18650 battery cells, which are very popular with DIY UPS builders. This alternative would probably also scale better than sealed lead-acid and charge or discharge much faster. There are lots of places like battery stores that will happily give you free, “dead” laptop batteries full of these cells. Typically it is just the control board, and maybe one or two of the half-dozen cells, that is actually bad. The rest will usually still work fine.

Choosing a UPS Paradigm

The linked tutorial describes three main types of UPS. I chose the Online type, “The Online UPS unit completely isolates the devices attached to it from the wall power. Instead of jumping into action at the first sign of power out or voltage regulation issues like the Standby and Line-Interactive units, the Online UPS unit continuously filters the wall power through the battery system. Because the attached electronics run completely off the battery bank (which is being perpetually topped off by the external power supply), there is never a single millisecond of power interruption when there is power loss or voltage regulation issues. The Online UPS unit, then, is effectively an electronic firewall between your devices and the outside world, scrubbing and stabilizing all the electricity your devices are ever exposed to.”

If it’s more expensive, why choose this type?

The goal of this project is radical energy independence. I want this to be expandable and compatible with eventual solar or wind power generation. This basically fills the same role as a Tesla Powerwall.

For reference, I found a great online community focused on cloning the Tesla Powerwall. There are lots of great ideas and examples in there.

Future Direction

There are tons of potential directions the project could go. For example, the smart-sockets could easily include powerline-wifi-adapters to replace wifi access points and greatly increase the wifi availability in your home while eliminating obnoxious and unnecessary hardware and wires.


This project includes a public repository of all the code and plans which anyone can contribute to, and a forum for discussing it. I will try to make it fairly modular and platform-agnostic so that people can use different hardware and still have a safe and secure system.

Building a MVP/POC


The core of the project is an Arduino or compatible microcontroller plus a network stack, which can be Wi-Fi or Ethernet. There are some great examples which combine these parts and even include the relay board if you would rather use that.

I will be using the brand-name Arduino Uno and the Arduino Ethernet Shield, only because I already had them. If I were buying parts for this project, I would probably use the one I linked to in the previous paragraph, because Wi-Fi would be a great feature for this project.

This is going to take a lot of different parts which need to connect to the Arduino. We need to use a parallel shift register (Explanation) in order to control lots of things with just a few pins.

A simple relay driver board does the heavy lifting of turning the sockets on and off.

Measuring the current through each socket will require a series of special sensors wired inline and then connected to the Arduino. Alternatively, there are several other examples I am exploring for this part.

The main power will come from the battery bank and go through a cigarette-lighter DC-to-AC inverter before hitting the GFCI socket and then the relay board. This makes the whole thing very safe, because there are circuit breakers built into each of these levels.

The battery bank will be charged by a standard ATX PC power supply, which will automatically be turned on and off by the Arduino when the power level requires it. (This means we only get seven sockets, since one of the eight relays will control the charging supply.)

Future Steps

The most obvious future step would be making an actual ready-to-order product which people can buy. This would require a great deal of funding since there would be regulatory requirements and manufacturing costs, but I really think this is something people would buy.

If people are willing to pay $30 or more for only a single smart-socket which does not measure usage, it makes a lot of sense that people would be willing to pay even more for more features and expandability in a device which offers multiple sockets with valuable metrics about usage.

Startup 7: Top Story Review

This is part of a series on Building 12 Startups in 12 Months.

This is number seven: TopStoryReview.com!

Black-Box News is Bad

If you look at the news on Google or Facebook, you will see a few stories which some mysterious algorithm has selected for you. Are these an accurate reflection of current events? No. These stories are often selected to confirm your biases based on your activity and search history. You are seeing the echo chamber your digital context has created for you, because that is how these companies capture and hold your attention.

Facebook and Google use black-box algorithms to pick what you see. This means that not only is there no explanation of how or why your stories were picked, but there is no way even for the engineers to reverse-engineer the algorithm and see how or why it picked the stories it did.

It is very common to see false stories featured as trending on Facebook or shared through other services. This effect has led to people doing horrible things based on false information presented as news by algorithms, like shooting up a pizza place or threatening and harassing the families of murdered children.

This is a problem for democracy and a problem for all of humanity. There has been much speculation that this has been a major contributing factor in the recent rise of populism in America and the results of the recent presidential election. Everyone should have access to concise, accurate snapshots of current events. My frustration with the lack of quality and lack of transparency in news aggregation services today led me to create an open-source alternative which omits biases and individual context.

But How?

This project expands on an experiment I started in high school. I was trying to aggregate various high-quality news sources, determine what major themes were trending, and present that information in a useful way.

Back then, I fetched the “Top Stories” or “Breaking News” RSS feeds from a few dozen newspapers around the US and combined them all into a MySQL table. I then did word counts to determine which words stood out, classified those into a list of general topics, and displayed the most recent or most reputable source’s story for each topic.
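Just to make the idea concrete, a rough shell sketch of that word-count approach could look like the following (feeds.txt is a hypothetical list of RSS URLs; the real version used PHP and MySQL):

#crude trending-words sketch: pull each feed's <title> text and count the most common words
while read -r feed; do
  curl -s "$feed" | grep -oP '(?<=<title>).*?(?=</title>)'
done < feeds.txt \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs '[:alnum:]' '\n' \
  | sort | uniq -c | sort -rn | head -20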

There are several big innovations I have come up with over the years which I can now incorporate into this idea in order to maximize value.

The final product will be simple homepages for lots of topics with a few bullet-points, imparting a concise and accurate representation of the current state of events.

My Open Algorithm

  1. Pull in thousands of rss feeds from an open list of high-quality news sources all over the world.
  2. Analyze the stories to find trending topics using my open condensr algorithm.
  3. Get all the stories relating to each topic.
  4. Condense all of the stories on each topic down to just one sentence.

Step 4 expands on the work of groups like SMMRY and Reddit’s autotldr bot. My new Condense algorithm can summarize thousands of pages of text into just a sentence or two. This is obviously not perfect, but it is surprisingly good, and there is always room to improve later.


The basic structure of the site will be a home page which combines all topics, and then lots of individual pages for each category. The home page and each topic page has a bullet-point list of a few trending stories with one sentence summaries. These bullet-point summaries will eventually link to a story page with a longer summary along with links to recent reporting from various sources.

The algorithm runs every hour, and maintains an archive of all the pages, so you can also look back at what was happening at any certain time.
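In cron terms, that hourly run is a single line; something like this, where the script path is just a placeholder:

0 * * * * root /usr/bin/php /var/www/webs/topstoryreview/aggregate.php > /dev/null 2>&1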

In effect, I am building a massive pipeline that takes in much of the world’s reporting and produces high quality condensed content which is much easier to absorb.

Making Money

There will be an enormous amount of content created hourly, and that means lots of organic traffic. SEO and social integration will be critical. I will eventually include ads to monetize the content.

Perfect Server, Version 18

This is the latest iteration of my perfect server. I am building this in order to consolidate and deprecate previous server inventory. Also, it includes many new best-practices which should further secure this new server.


The first step is to provision a new server. I use DigitalOcean. I will be logged in as root for all of this, since everything here needs to be done as root. If you don’t want to log in as root, you can prefix each command with sudo instead.

Now, add some sources to the package manager. Get there with;

nano /etc/apt/sources.list

Add these repositories;

deb http://ftp.debian.org/debian jessie-backports main

deb http://packages.dotdeb.org jessie all

We also need to add the GPG keys so the new repositories will work. Run these commands;

wget https://www.dotdeb.org/dotdeb.gpg
sudo apt-key add dotdeb.gpg

Update the package manager and upgrade any packages that are available;

apt-get update && apt-get upgrade

Now install all the packages we will need;

apt-get -y install fail2ban apache2 php7.0 php-pear php7.0-mysql php7.0-mcrypt php7.0-mbstring libapache2-mod-php7.0 php7.0-curl screenfetch htop nload curl git unzip ntp mcrypt postfix mailutils php7.0-memcached mysql-server && apt-get install python-certbot-apache -t jessie-backports && a2enmod rewrite && service apache2 restart && mysql_secure_installation

You will be prompted to create a MySQL root password, which you will then immediately be asked for again when mysql_secure_installation runs at the end of that chain.

Name Thyself

Now navigate to the virtualhost directory;

cd /etc/apache2/sites-available

Remove the default ssl virtualhost. We will be creating a new one instead.

rm default-ssl.conf

Rename the default virtualhost to the fqdn of the server. Example: server3.website.com. Note that this is not the fqdn of the site(s) we are hosting on the server.

mv 000-default.conf [fqdn].conf

Edit the file and replace the admin email with your own. Change the DocumentRoot to /var/www instead of /var/www/html.

Now add the following block within the virtualhost tag of the file and save it.

<Directory "/var/www">
AuthType Basic
AuthName "Restricted Content"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
</Directory>

Lock it Down

Let’s create a credential set for our new virtualhost. This is sort of a catch-all for any domains we point here which are not yet set up.

htpasswd -c /etc/apache2/.htpasswd [username]

You will be prompted for a password. Basic auth is very brute-forceable, so my best practice is to use very high-entropy strings for both the username and the password; typically at least 64 bits of random base64 for each.
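If you want a quick way to generate strings like that, openssl can do it; run this once for the username and once for the password:

#prints 96 random bits, base64-encoded
openssl rand -base64 12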

Apply Changes

We need to tell apache that we have changed the name of the default virtualhost file. First we disable the one we changed, and then enable our new one.

a2dissite 000-default

a2ensite [fqdn]

Now restart apache

service apache2 restart

Test our changes by navigating to the fqdn of the server. You should be prompted for a username and password.
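You can also check this from another shell with curl; these commands are just one way to confirm the password prompt is working:

#should return 401 Unauthorized without credentials
curl -I http://[fqdn]/

#and should no longer return 401 once credentials are supplied
curl -I -u [username]:[password] http://[fqdn]/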

Administrative Tools

We will need to put some tools in here so we can administer the server.


phpMyAdmin will allow us to manage the databases we will be creating on the server. Head over to the phpMyAdmin website and get the download link for the current version.

Navigate to our new secure DocumentRoot directory and download that link.

cd /var/www && wget [link]

Now unzip it and remove the zip file we downloaded.

unzip [file] && rm [file]

Now that we have a PHPMyAdmin directory in our secure virtualhost, we need to configure it. Luckily it can do that itself! Use this command and enter the mysql root password when prompted.

mysql -uroot -p < /[unzipped phpmyadmin folder]/sql/create_tables.sql

The last thing phpMyAdmin needs is a secret string. Copy the sample config file config.sample.inc.php to config.inc.php and edit it with nano.

Make sure to add a random string where prompted at the top of the file.
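In shell terms, from inside the unzipped phpMyAdmin folder, that looks something like this; the random string goes into the blowfish_secret line near the top of config.inc.php:

cp config.sample.inc.php config.inc.php
#generate a random 32-character secret to paste into the config
openssl rand -base64 24
nano config.inc.php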

Postfix Outbound-Mail Server

We need to edit the Postfix config file so that it stops listening for outside connections. We already set up a firewall rule to block connections to port 25, but those rules can be changed by mistake, so this is a good second line of defense: it prevents the public from sending mail through our server while still allowing us to use it locally.

nano /etc/postfix/main.cf

Find this line;

inet_interfaces = all

And change to;

inet_interfaces =

Now edit the email aliases;

nano /etc/aliases

At the end of the file, make sure there is a line that starts with root and ends with your email, like so;

root: email@domain.com

Save the file and exit. Then run newaliases to let Postfix apply the changes. Restarting Postfix is not enough because we changed the interfaces line in the config file. We need to stop and start it like so;

newaliases && postfix stop && postfix start

Now our sites will be able to send emails!

VPS Home

This is something simple I built which serves as a better index page for the secure virtual host and includes several helpful tools for diagnostic purposes. To try it out, run this command from the DocumentRoot directory.

wget https://raw.githubusercontent.com/cjtrowbridge/vps-home/master/index.php


It’s helpful to be able to access details of the server’s php installation from this directory. I like to create a file called phpinfo.php which contains simply

<?php phpinfo();

Automatic Backups

Create a new file called /root/backup.sh and add the following to it. Make sure to replace the mysql password with yours.


#deletes old backups
find /var/www/backups/ -mindepth 1 -mmin +$((60*24)) -delete

#creates new backups
tar -czf "/var/www/backups/webs-$( date +'%Y-%m-%d_%H-%M-%S' ).gz" /var/www/webs
/usr/bin/mysqldump -uroot -p[mysql root password] --all-databases | gzip > "/var/www/backups/mysql-$( date +'%Y-%m-%d_%H-%M-%S' ).sql.gz"

Now edit the crontab with nano /etc/crontab and add this line. This will automatically run that script every day at 8pm.

0 20 * * * root /root/backup.sh > /dev/null 2>&1

Make sure to give the script permission to execute.

chmod 775 /root/backup.sh

Offsite Backups

Now that we have regular backups going, we need to regularly get them off the server. I like Bittorrent Sync for this. It is very fast and seamless.

Run these commands to install Bittorrent Sync;

sh -c “$(curl -fsSL http://debian.yeasoft.net/add-btsync14-repository.sh)”

apt-get update && apt-get install btsync

I recommend selecting the user and group www-data for btsync as this will greatly simplify administration and file permissions.

Then navigate to the web UI on the port you set up during the installation process and create credentials. As always, be sure to use very high-entropy credentials. Then create a shared folder for the backups, and copy the read-write key to your NAS so it can securely copy the backups every day.

The really great thing about this approach is that when your cron job deletes the backups on the server each day, Bittorrent Sync will archive them on your NAS so that they are always available. This means that at any point, you can simply take the automatically generated gz of your webs and mysql, and recreate your entire server in minutes.
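For reference, rebuilding from those backups on a fresh server is roughly this (the filenames are placeholders; tar strips the leading slash when archiving, so extracting from / puts everything back under /var/www/webs):

tar -xzf webs-[date].gz -C /
gunzip < mysql-[date].sql.gz | mysql -uroot -p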

Migrating Sites In

Move over the files for all the sites you want to host into individual directories in the /var/www/webs directory.

Now navigate to your virtualhosts directory.

cd /etc/apache2/sites-available

We created a default virtualhost file for the server and named it [fqdn].conf. This was the fqdn of the server, but not the sites it will host. Now we want to create our first hosted site. Copy the default file we made to create a new virtualhost like so…

cp [server fqdn].conf [site fqdn].conf

You can use any naming convention you like, but managing dozens or hundreds of these will become impossible if you are not naming them clearly.

Next, we need to add some new things to this hosted site’s virtualhost file. Add a new line inside the virtualhost tag like this;

ServerName [site fqdn]

And change the line which has DocumentRoot to point to the directory for this hosted site. For example;

DocumentRoot /var/www/webs/[site fqdn]

Lastly add these two blocks at the end of the file.

<DirectoryMatch "^/.*/\.git/">
Order deny,allow
Deny from all
</DirectoryMatch>

<Directory /var/www/webs/[site fqdn]>
Options FollowSymLinks
AllowOverride All
Require all granted
</Directory>

The first block will prevent anyone from navigating into a git repository and accessing sensitive data like credentials or from cloning the repository.

The second block will allow htaccess files or directory rewrites, and prevent directory listing. These are required changes if you want to host WordPress sites, and best practices all around.

Now we just need to enable these changes and make the site live with;

a2ensite [site fqdn] && service apache2 restart

From this point on, this new virtualhost can be copied to create new sites, rather than recreating each one from the original virtualhost file.

Free SSL

We already set up Let’s Encrypt’s certbot, so now we just need to run it. Once the domains are set up and pointed to the server’s IP, and a virtualhost is configured, all it takes is running certbot, which takes care of everything.
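For a single hosted site, that is typically just the following; certbot will edit the virtualhost and install the certificate for you:

certbot --apache -d [site fqdn]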


Startup 6: Exotic Weapons

Today, there are many fascinating examples of people using successful techniques to create passive income online.

Those of us who grew up coding and then took that perspective to business have a unique advantage. You might call it a super power.

We, the digital priesthood, conjure passive income from the ether with the aid of new information technologies.

In this new blog, we will explore the ways in which engineers are the new pioneers and magicians, drawing the first maps of the digital frontiers we create and using esoteric knowledge to pluck money out of thin air.

All of the examples we discuss on Exotic Weapons will come with enough information to do it yourself. (And without the sales pitch.)

Startup 5: Condensr

This is a free tool based on several other tools I have seen online. It accepts long-form text and condenses it to the length you specify.

The code is very simple, and very powerful. I was shocked at how easy this was to build. Check it out on Github, or head over to the site and start condensing!

Next Steps

I have started initial development of an API, and I want to add a feed of things people have Condensed as well as a bookmarklet tool for condensing news articles.

How to Build a Free Linux Microsoft SQL Server

This covers how to create a virtual linux server running Microsoft SQL Server.


First, create a virtual server with the following requirements in mind.

  • Ubuntu 16.04 LTS (Server or Desktop)
  • At least two CPUs
  • At least 4GB RAM
  • At least 10GB HDD for the operating system
  • PLUS at least double the amount of space your databases will use

When you install Ubuntu, make sure to enable updates and third-party software, unless you’re the real DIY-type.

First, update the default installation packages;

sudo apt-get update && sudo apt-get upgrade

Now we need to install a few tools before we can get started;

sudo apt-get install cifs-utils curl

Install Microsoft SQL Server

Run these commands to install Microsoft SQL Server and its tools;

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list | sudo tee /etc/apt/sources.list.d/mssql-server.list

sudo apt-get update && sudo apt-get install -y mssql-server

sudo /opt/mssql/bin/mssql-conf setup

You will be prompted to create an SA or Server Administrator password. Use something with high entropy!

Now install tools;

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | sudo tee /etc/apt/sources.list.d/msprod.list

sudo apt-get update && sudo apt-get install mssql-tools unixodbc-dev

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc

source ~/.bashrc
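At this point you can sanity-check the installation; something like this should print the server version after prompting for the SA password:

sqlcmd -S localhost -U SA -Q "SELECT @@VERSION"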

Now, copy over your backup file and put it in /var/opt/mssql/data/

I have not gotten the CLI tools to work for importing the backups; they seem to look for Windows paths. You will need to use SQL Server Management Studio to import the backup.
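For what it is worth, the path problem can often be worked around by restoring with explicit MOVE clauses. This is an untested sketch; the database name and logical file names are placeholders you can look up with RESTORE FILELISTONLY:

#list the logical file names inside the backup
sqlcmd -S localhost -U SA -Q "RESTORE FILELISTONLY FROM DISK = '/var/opt/mssql/data/[backup file].bak'"

#restore, remapping the Windows paths to Linux paths
sqlcmd -S localhost -U SA -Q "RESTORE DATABASE MyDatabase FROM DISK = '/var/opt/mssql/data/[backup file].bak' WITH MOVE 'MyDatabase' TO '/var/opt/mssql/data/MyDatabase.mdf', MOVE 'MyDatabase_log' TO '/var/opt/mssql/data/MyDatabase_log.ldf'"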