Startup 9: What are you wearing today?

Like many people, I am a student. I attend over a dozen classes at two different colleges. I also attend regular social events and networking meetups. I often wonder as I am dressing, is this the same thing I wore last time I went to the place I’m heading now?

It sounds silly, but it’s something I always ask myself because I want to make the right impression. This is further complicated by the fact that I am something of a minimalist, so I keep fewer than ten shirts at any given time and only a few pairs of pants and shorts.

I was talking one day to my mom about this anxiety I feel, and she emphatically agreed.

I decided to make a simple app to keep track of what I wear each day so that I can look back and see what I wore last time I was at a particular class or event and know not to wear the same thing again. It’s schmuck insurance.

Wearing.Today is the result!


The minimum viable product version of this app is not social; each user’s profile is private. Users can post pictures from a simple single-page app which also lets them edit each post’s blurb, or delete their posts.

Paradigm Shift

This app is written completely in the functional paradigm. I wrote it as a single-page HTML/JS application built to be hosted on S3, with Lambda providing the functions.

I did not actually deploy it there because I don’t already have accounts with those services, and they don’t support my primary language, PHP. The point was not to actually do those things, but to rapidly write a simple, complete app in that paradigm and get a sense of the workflow and of how to construct the API.

My Open Revenue Dashboard

Following, as I often do, the example of Pieter Levels, I have decided to create an open, public page showing progress towards my revenue goals for several of my projects. This lets me hold myself accountable and show hard numbers related to the writing I’m doing.

Dashboards are a valuable tool that helps us define clear, measurable metrics for concisely tracking and communicating progress towards goals. They are excellent for business as well as for our personal lives.


I have seen several different approaches to accomplishing this type of dashboard. Many of them have included attempts to develop the dashboard itself from scratch in order to prove the ability to do so. Instead, I have decided to go for a serverless approach: a Google Sheets document with a connected graph published onto the page. I will simply add data to the spreadsheet and Google will update the graph images automagically. No reason to overcomplicate this.

I want to follow two main metrics: total monthly revenue by project, and revenue mix.

You can view the dashboard here:

2019 Goals

  • Each of three projects produces at least 20% of total revenue.
  • Total monthly revenue from these new projects exceeds $1,000.

Most entrepreneurial thought leaders advise starting with a number for how much cashflow you need to live. This number is often called MRR, or monthly recurring revenue. I am currently a student who does not need to have any income. Thus, I am less focused on the total than on building a steady, nonzero revenue stream from each project. I have set an arbitrary goal of $1,000 MRR by the end of the year.
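As a sketch of how the revenue-mix metric works, a small awk one-liner can compute each project’s share of total revenue. The project names and dollar amounts below are invented purely for illustration:

```shell
# Hypothetical monthly revenue per project (name, dollars)
cat > revenue.txt <<'EOF'
condensr 400
stardate 350
topstory 250
EOF

# Print each project's share of total revenue as a percentage
awk '{ rev[$1] = $2; total += $2 }
     END { for (p in rev) printf "%s %.0f%%\n", p, 100 * rev[p] / total }' revenue.txt | sort
```

With this spread, each project clears the 20% bar and the total is exactly $1,000, so both 2019 goals would be met.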

Startup 8: Stardate.Today

For years, I have calendared prolifically. (Is that a word?) I track all the granular details: my to-do list is on one calendar; all my classes and homework; my social life; gym, yoga and exercise; and previously my jobs at Tech 2u and Starbucks. Each of these topics is on its own color-coded calendar within Google Calendar. You can take a look at what my weeks look like at

There is a problem though. Sometimes I wonder what I accomplished on a particular day off or with my free time after work. I go back in my calendar to check, and there is nothing there; just a blank spot where I neglected to note the day’s events. It’s hard to quantify how well I use free time or time at work when I don’t have any record of what I accomplished. What if I could make like Janeway and just shout at the wall like it’s my journal?

Journaling always sounded interesting, but there’s no way I am going to lug around paper and a pen to write in it, and even if I did, it wouldn’t be searchable.

Enter Stardate.Today. This simple tool lets me type out my stream of consciousness into private posts, and then adds them to a timeline for me to search back through. They are also added to my calendar so I can see when each thing happened.

It takes just seconds to get started. Simply log in with Google and you will be given a link which you can enter into your phone or any calendar tool. On the homepage, you are presented with a simple box to type in and a list of your past posts. When you enter a post, it is automatically added to your calendar.

Simple as that.

[Working Draft] Smarter Sockets

This is something I looked long and hard for, before deciding to build it myself. Also, a school project pulled this off the shelf and added some urgency. Last year I completed a similar but very early proof-of-concept prototype just for the web-controlled relay board. It didn’t have the parallel shift registers, the relay board, or the actual sockets. I never finished it or came close to building an actual final product.

[Take a better picture]

This prototype was built on an Arduino-controlled relay board which in turn drove a series of electrical sockets, controlled from a web page over an Ethernet connection. [Feature link to commits where I shared this with the Arduino community.] The new version also lets you monitor consumption and prevents outages with a battery backup.

This project combines these things:

There is not currently any product that does this which I have been able to find. And as a minimalist who is committed to conserving energy, being independent, and being aware of my footprint, I was really hoping to find something like this out there.

Improving on Kill-A-Watt

Kill-A-Watt is a very interesting product which nevertheless has some huge development opportunities.

It lets you see power usage for a single electric socket. There are several problems with this.

For one, it only works for one socket.

Two, it doesn’t let you see the data except through the limited interface. There are no graphs of usage over time and there is no web accessibility for the data.

Expanding on this idea, each socket on my new device will allow you to see power consumption over time. There will be a clean web-based interface which lets you see any unusual spikes and be responsible with your energy consumption. This will pair well with the smart-socket feature which will enable you to turn things off when you are not using them or when they are using too much energy.

Improving on Smart-Sockets

Smart sockets are very limited. They typically offer only one socket, and do not offer usage metrics. Also, they feature very poorly implemented security and control software.

[Explore Steve Gibson’s IoT security reviews]

Improving on these widespread industry problems will be an important and valuable step.

At the hardware level, offering multiple smart-sockets is already a huge improvement, as is offering usage metrics, but there is room to improve further. Another major feature of this project is feature granularity. I want to give enough detail that developers can create multiple physical formats. Maybe you only want one socket, not eight. Why not?

There is no reason this system cannot fit into a wall socket and replace the old-fashioned ones you already have. Imagine removing the mess of plugging devices into devices into the wall, and just putting the smart-socket inside the wall.

[include diagram]

Including a UPS (Might take this out)

I started with this very thorough tutorial which does a great job of explaining the terms and options that differentiate existing UPS products.

I found a discarded 2kW UPS with a dead logic board at a computer repair shop, which I was able to get for free. The sealed lead-acid batteries all worked fine; it was just a bad control board. :] This was exceptionally lucky, but you may be able to find something similar if you look.

I had also explored scavenging 18650 battery cells, which are very popular with DIY UPS builders. This alternative would probably scale better than sealed lead-acid and charge and discharge much faster. Lots of places like battery stores will happily give you free, “dead” laptop batteries full of these cells. Typically it is just the control board, and maybe one or two of the half-dozen cells, that is actually bad. The rest will usually still work fine.

Choosing a UPS Paradigm

The linked tutorial describes three main types of UPS. I chose the Online type, “The Online UPS unit completely isolates the devices attached to it from the wall power. Instead of jumping into action at the first sign of power out or voltage regulation issues like the Standby and Line-Interactive units, the Online UPS unit continuously filters the wall power through the battery system. Because the attached electronics run completely off the battery bank (which is being perpetually topped off by the external power supply), there is never a single millisecond of power interruption when there is power loss or voltage regulation issues. The Online UPS unit, then, is effectively an electronic firewall between your devices and the outside world, scrubbing and stabilizing all the electricity your devices are ever exposed to.”

If it’s more expensive, why choose this type?

The goal of this project is radical energy independence. I want this to be expandable and compatible with eventual solar or wind power generation. This basically fills the same role as a Tesla Powerwall.

For reference, I found a great online community focused on cloning the Tesla Powerwall. There are lots of great ideas and examples in there.

Future Direction

There are tons of potential directions the project could go. For example, the smart-sockets could easily include powerline-wifi-adapters to replace wifi access points and greatly increase the wifi availability in your home while eliminating obnoxious and unnecessary hardware and wires.


This project includes a public repository of all the code and plans which anyone can contribute to, and a forum for discussing it. I will try to make it fairly modular and platform-agnostic so that people can use different hardware and still have a safe and secure system.

Building a MVP/POC


The core of the project is an Arduino/compatible processor and a network stack. This can be Wifi or Ethernet. There are some great examples which combine these parts together and even include the relay board if you would rather use that.

I will be using the brand-name Arduino Uno and the Arduino Ethernet Shield, only because I already had them. If I were buying parts for this project, I would probably use the one I linked to in the previous paragraph, because Wifi would be a great feature for this project.

This is going to take a lot of different parts which need to connect to the Arduino. We need to use a parallel shift register (Explanation) in order to control lots of things with just a few pins.

A simple relay driver board does the heavy lifting of turning the sockets on and off.

Measuring the current through each socket will require a series of special sensors wired inline and then connected to the Arduino. Alternatively, there are several other examples I am exploring for this part.

The main power will come from the battery bank and go through a cigarette-lighter DC-AC converter before hitting the GFCI socket and then the relay board. This makes the whole thing very safe, because there are several circuit breakers built into each of these levels.

The battery bank will be charged by a standard ATX PC power supply which will automatically be turned on and off by the Arduino when the power level requires it. (This means we only get seven sockets, since one of the eight relays will control the charging supply.)

Future Steps

The most obvious future step would be making an actual ready-to-order product which people can buy. This would require a great deal of funding since there would be regulatory requirements and manufacturing costs, but I really think this is something people would buy.

If people are willing to pay $30 or more for only a single smart-socket which does not measure usage, it makes a lot of sense that people would be willing to pay even more for more features and expandability in a device which offers multiple sockets with valuable metrics about usage.

Startup 7: Top Story Review

This is part of a series on Building 12 Startups in 12 Months.

This is number seven!

Black-Box News is Bad

If you look at the news on Google or Facebook, you will see a few stories which some mysterious algorithm has selected for you. Are these an accurate reflection of current events? No. These stories are often selected to confirm your biases based on your activity and search history. You are seeing the echo chamber your digital context has created for you, because that is how these companies capture and hold your attention.

Facebook and Google use black-box algorithms to pick what you see. This means that not only is there no explanation of how or why your stories were picked, but there is no way even for the engineers to reverse-engineer the algorithm and see how or why it picked the stories it did.

It is very common to see stories featured as trending on Facebook, or shared through other services, which are false. This effect has led people to do horrible things based on false information presented as news by algorithms, like shooting up a pizza place or threatening and harassing the families of murdered children.

This is a problem for democracy and a problem for all of humanity. There has been much speculation that this has been a major contributing factor in the recent rise of populism in America and the results of the recent presidential election. Everyone should have access to concise, accurate snapshots of current events. My frustration with the lack of quality and lack of transparency in news aggregation services today led me to create an open-source alternative which omits biases and individual context.

But How?

This project expands on an experiment I started in high school: aggregating various high-quality news sources, determining what major themes were trending, and presenting that information in a useful way.

Back then, I started by fetching the “Top Stories” or “Breaking News” RSS feeds from a few dozen newspapers around the US and combining them all into a MySQL table. I then did word counts to determine which words stood out, classified those to build a list of general topics, and displayed the most recent or most reputable source’s story for each topic.
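That word-count step maps naturally onto a classic shell pipeline. This is only a sketch of the idea; the headlines are invented, and real use would need a stop-word list and smarter tokenization:

```shell
# A few fake headlines standing in for the combined RSS feeds
cat > headlines.txt <<'EOF'
Senate passes budget bill
Budget talks stall in Senate
Storm batters coast as budget vote nears
EOF

tr 'A-Z' 'a-z' < headlines.txt |  # normalize case
  tr -cs 'a-z' '\n' |             # one word per line
  sort | uniq -c | sort -rn |     # frequency count, descending
  head -5                         # "budget" (3 mentions) rises to the top
```

Words that spike across many sources at once are the trending-topic candidates.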

There are several big innovations I have come up with over the years which I can now incorporate into this idea in order to maximize value.

The final product will be simple homepages for lots of topics with a few bullet-points, imparting a concise and accurate representation of the current state of events.

My Open Algorithm

  1. Pull in thousands of RSS feeds from an open list of high-quality news sources all over the world.
  2. Analyze the stories to find trending topics using my open condensr algorithm.
  3. Get all the stories relating to each topic.
  4. Condense all of the stories on each topic down to just one sentence.

Step 4 expands on the work of groups like SMMRY and Reddit’s autotldr bot. My new Condense algorithm can summarize thousands of pages of text into just a sentence or two. This is obviously not perfect, but it is surprisingly good, and there is always room to improve later.


The basic structure of the site will be a home page which combines all topics, and then lots of individual pages for each category. The home page and each topic page have a bullet-point list of a few trending stories with one-sentence summaries. These bullet points will eventually link to a story page with a longer summary along with links to recent reporting from various sources.

The algorithm runs every hour, and maintains an archive of all the pages, so you can also look back at what was happening at any certain time.

In effect, I am building a massive pipeline that takes in much of the world’s reporting and produces high quality condensed content which is much easier to absorb.

The Codebase

Here is the link to the codebase. It’s all there and basically you can just fork it and change anything you want. I think there is a lot of opportunity for this project to expand in interesting new directions.

Making Money

There will be an enormous amount of content created hourly, and that means lots of organic traffic. SEO and social integration will be critical. I will eventually include ads to monetize the content.

Perfect Server, Version 18

This is the latest iteration of my perfect server. I am building this in order to consolidate and deprecate previous server inventory. Also, it includes many new best-practices which should further secure this new server.


The first step is to provision a new server. I use DigitalOcean. I will be logged in as root for all of this, since this is all stuff that needs to be done as root. If you don’t want to log in as root, you can instead use sudo at the beginning of each command.

Now, add some sources to the package manager. Get there with;

nano /etc/apt/sources.list

Add these repositories;

deb [mirror url] jessie-backports main

deb [dotdeb mirror url] jessie all

We also need to add the GPG key so the new repositories will work;

sudo apt-key add dotdeb.gpg

Update the package manager and upgrade any packages that are available;

apt-get update && apt-get upgrade

Now install all the packages we will need;

apt-get -y install fail2ban apache2 php7.0 php-pear php7.0-mysql php7.0-mcrypt php7.0-mbstring libapache2-mod-php7.0 php7.0-curl screenfetch htop nload curl git unzip ntp mcrypt postfix mailutils php7.0-memcached mysql-server && apt-get install python-certbot-apache -t jessie-backports && a2enmod rewrite && service apache2 restart && mysql_secure_installation

You will be prompted to create a MySQL root password, which you will then immediately be asked for when mysql_secure_installation runs.

Name Thyself

Now navigate to the virtualhost directory;

cd /etc/apache2/sites-available

Remove the default ssl virtualhost. We will be creating a new one instead.

rm default-ssl.conf

Rename the default virtualhost file to the FQDN of the server. Note that this is not the FQDN of the site(s) we are hosting on the server.

mv 000-default.conf [fqdn].conf

Edit the file and replace the admin email with your own. Change the DocumentRoot to /var/www instead of /var/www/html.

Now add the following block within the virtualhost tag of the file and save it.

<Directory "/var/www">
AuthType Basic
AuthName "Restricted Content"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
</Directory>

Lock it Down

Let’s create a credential set for our new virtualhost. This is sort of a catch-all for any domains we point here which are not yet set up.

htpasswd -c /etc/apache2/.htpasswd [username]

You will be prompted for a password. This credential set is very bruteforceable, so my best practice is to use very high entropy strings for both the username and the password, typically at least 64 bits of randomness encoded as base64 for each.
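If you want a quick way to generate strings like that, openssl (present on virtually every server) works well. The variable names here are just placeholders:

```shell
# 48 random bytes (384 bits of entropy), base64-encoded,
# with slashes, pluses, padding, and newlines stripped out
AUTHUSER=$(openssl rand -base64 48 | tr -d '/+=\n')
AUTHPASS=$(openssl rand -base64 48 | tr -d '/+=\n')
echo "$AUTHUSER"

# then feed the username to htpasswd and paste the password when prompted:
# htpasswd -c /etc/apache2/.htpasswd "$AUTHUSER"
```

Store both strings in your password manager; you will never type them from memory.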

Apply Changes

We need to tell apache that we have changed the name of the default virtualhost file. First we disable the one we changed, and then enable our new one.

a2dissite 000-default

a2ensite [fqdn]

Now restart apache

service apache2 restart

Test our changes by navigating to the FQDN of the server. You should be prompted for a username and password.

Administrative Tools

We will need to put some tools in here so we can administer the server.


This will allow us to manage the databases we will be creating on the server. Head over to their website and get the download link for the current version.

Navigate to our new secure DocumentRoot directory and download that link.

cd /var/www && wget [link]

Now unzip it and remove the zip file we downloaded.

unzip [file] && rm [file]

Now that we have a PHPMyAdmin directory in our secure virtualhost, we need to configure it. Luckily it can do that itself! Use this command and enter the mysql root password when prompted.

mysql -uroot -p < /[unzipped phpmyadmin folder]/sql/create_tables.sql

The last thing PHPMyAdmin needs is a secret string. Copy the sample config file to config.inc.php, then edit it with nano and add a random string to the $cfg['blowfish_secret'] line near the top of the file.

Postfix Outbound-Mail Server

We need to edit the config files for postfix and change the interface to loopback-only like so. We already set up a firewall rule to block connections to port 25, but those rules can be changed by mistake, so this will be a good second line of defense to prevent public access to sending mail through our server, while allowing us to still use it locally.

nano /etc/postfix/

Find this line;

inet_interfaces = all

And change to;

inet_interfaces = loopback-only

Now edit the email aliases;

nano /etc/aliases

At the end of the file, make sure there is a line that starts with root and ends with your email, like so;

root: [your email]
Save the file and exit. Then run newaliases to let Postfix apply the changes. Restarting Postfix is not enough because we changed the interfaces line in the config file. We need to stop and start it like so;

newaliases && postfix stop && postfix start

Now our sites will be able to send emails!

VPS Home

This is something simple I built which serves as a better index page for the secure virtual host and includes several helpful tools for diagnostic purposes. To try it out, run this command from the DocumentRoot directory.



It’s helpful to be able to access details of the server’s php installation from this directory. I like to create a file called phpinfo.php which contains simply

<?php phpinfo();
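Creating that file is a one-liner; this assumes you are sitting in the DocumentRoot when you run it:

```shell
# Write a phpinfo stub into the (password-protected) DocumentRoot
echo '<?php phpinfo();' > phpinfo.php
```

Because the whole virtualhost sits behind basic auth, exposing phpinfo here is reasonably safe.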

Automatic Backups

Create a new file called /root/ and add the following to it. Make sure to replace the mysql password with yours.


#!/bin/bash

# deletes backups older than one day
find /var/www/backups/ -mindepth 1 -mmin +$((60*24)) -delete

# creates new backups
tar -czf "/var/www/backups/webs-$( date +'%Y-%m-%d_%H-%M-%S' ).tar.gz" /var/www/webs
/usr/bin/mysqldump -uroot -p[mysql root password] --all-databases | gzip > "/var/www/backups/mysql-$( date +'%Y-%m-%d_%H-%M-%S' ).sql.gz"

Now edit the crontab with nano /etc/crontab and add this line. This will automatically run that script every day at 8pm.

0 20 * * * root /root/ > /dev/null 2>&1

Make sure to give the script permission to execute.

chmod 775 /root/
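If you want to convince yourself the retention rule behaves, you can try the same find expression on throwaway files in a scratch directory (this assumes GNU touch for the -d flag):

```shell
# Demonstrate the retention rule: -mmin +$((60*24)) matches files older than 24 hours
mkdir -p /tmp/backup-demo
touch /tmp/backup-demo/new-backup.gz
touch -d '2 days ago' /tmp/backup-demo/old-backup.gz

find /tmp/backup-demo/ -mindepth 1 -mmin +$((60*24)) -delete

ls /tmp/backup-demo
```

Only new-backup.gz should survive; -mindepth 1 keeps find from deleting the backups directory itself.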

Offsite Backups

Now that we have regular backups going, we need to regularly get them off the server. I like BitTorrent Sync for this. It is very fast and seamless.

Run these commands to install Bittorrent Sync;

sh -c "$(curl -fsSL [script url])"

apt-get update && apt-get install btsync

I recommend selecting the user and group www-data for btsync as this will greatly simplify administration and file permissions.

Then navigate to the port you set up during the installation process and create credentials. As always, be sure to use very high entropy credentials. Then create a shared folder for the backups, and copy the read-write key to your NAS to securely copy the backups every day.

The really great thing about this approach is that when your cron job deletes the old backups on the server each day, BitTorrent Sync will archive them on your NAS so that they are always available. This means that at any point, you can simply take the automatically generated archives of your webs and MySQL data and recreate your entire server in minutes.
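Restoring is the same process in reverse. Here is a minimal sketch using scratch paths (the real archives live in /var/www/backups and on your NAS; every filename below is illustrative):

```shell
# Simulate a web-root backup and restore in scratch directories
mkdir -p /tmp/restore-demo/var/www/webs
echo 'hello' > /tmp/restore-demo/var/www/webs/index.html

# what the daily cron job produces:
tar -czf /tmp/restore-demo/webs-backup.tar.gz -C /tmp/restore-demo var/www/webs

# restoring onto a fresh server:
mkdir -p /tmp/restore-demo/fresh
tar -xzf /tmp/restore-demo/webs-backup.tar.gz -C /tmp/restore-demo/fresh

# the database half would be: gunzip < mysql-backup.sql.gz | mysql -uroot -p
```

Run the extraction against / on a fresh server and the web root is back exactly where Apache expects it.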

Migrating Sites In

Move over the files for all the sites you want to host into individual directories in the /var/www/webs directory.

Now navigate to your virtualhosts directory.

cd /etc/apache2/sites-available

We created a default virtualhost file for the server and named it [fqdn].conf. This was the fqdn of the server, but not the sites it will host. Now we want to create our first hosted site. Copy the default file we made to create a new virtualhost like so…

cp [server fqdn].conf [site fqdn].conf

You can use any naming convention you like, but managing dozens or hundreds of these will become impossible if you are not naming them clearly.

Next, we need to add some new things to this hosted site fqdn. Add a new line inside the virtualhost tag like this;

ServerName [site fqdn]

And change the line which has DocumentRoot to point to the directory for this hosted site. For example;

DocumentRoot /var/www/webs/[site fqdn]

Lastly add these two blocks at the end of the file.

<DirectoryMatch "^/.*/\.git/">
Order deny,allow
Deny from all
</DirectoryMatch>

<Directory /var/www/webs/[site fqdn]>
Options FollowSymLinks
AllowOverride All
Require all granted
</Directory>
The first block will prevent anyone from navigating into a git repository and accessing sensitive data like credentials or from cloning the repository.

The second block will allow htaccess files or directory rewrites, and prevent directory listing. These are required changes if you want to host WordPress sites, and best practices all around.

Now we just need to enable these changes and make the site live with;

a2ensite [site fqdn] && service apache2 restart

From this point on, this new virtualhost can be copied to create new sites, rather than recreating each one from the original virtualhost file.

Free SSL

We already set up Let’s Encrypt, so now we just need to run it. Once the domains are set up and pointed to the server’s IP, and a virtualhost has been configured, all it takes is running certbot, which takes care of everything.


Startup 6: Exotic Weapons

Today, there are many fascinating examples of people using successful techniques to create passive income online.

Those of us who grew up coding and then took that perspective to business have a unique advantage. You might call it a super power.

We, the digital priesthood, conjure passive income from the ether with the aid of new information technologies.

In this new blog, we will explore the ways in which engineers are the new pioneers and magicians, drawing the first maps of the digital frontiers we create and using esoteric knowledge to pluck money out of thin air.

All of the examples we discuss on Exotic Weapons will come with enough information to do it yourself. (And without the sales pitch.)

Startup 5: Condensr

This is a free tool based on several other tools I have seen online. It accepts long-form text and condenses it to the length you specify.

The code is very simple, and very powerful. I was shocked at how easy this was to build. Check it out on Github, or head over to the site and start condensing!

Next Steps

I have started initial development of an API, and I want to add a feed of things people have condensed as well as a bookmarklet tool for condensing news articles.

How to Build a Free Linux Microsoft SQL Server

This covers how to create a virtual linux server running Microsoft SQL Server.


First, create a virtual server with the following requirements in mind.

  • Ubuntu 16.04 LTS (Server or Desktop)
  • At least two CPUs
  • At least 4 GB RAM
  • At least 10 GB HDD for the operating system
  • PLUS at least double the amount of space your databases will use

When you install Ubuntu, make sure to enable updates and third-party software, unless you’re the real DIY-type.

First, update the default installation packages;

sudo apt-get update && sudo apt-get upgrade

Now we need to install a few tools before we can get started;

sudo apt-get install cifs-utils curl

Install Microsoft SQL Server

Run these commands to install Microsoft SQL Server and its tools;

curl [package key url] | sudo apt-key add -

curl [repo list url] | sudo tee /etc/apt/sources.list.d/mssql-server.list

sudo apt-get update && sudo apt-get install -y mssql-server

sudo /opt/mssql/bin/mssql-conf setup

You will be prompted to create an SA (system administrator) password. Use something with high entropy!

Now install tools;

curl [package key url] | sudo apt-key add -

curl [repo list url] | sudo tee /etc/apt/sources.list.d/msprod.list

sudo apt-get update && sudo apt-get install mssql-tools unixodbc-dev

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc

source ~/.bashrc

Now, copy over your backup file and put it in /var/opt/mssql/data/

I have not gotten the CLI tools to work for importing the backups; they seem to look for Windows paths. You will need to use SQL Server Management Studio to import the backup.

How To Create a Local Storage Repository on XenServer

I was working with a XenServer in a complex corporate network environment, and it was not possible for this server to access any samba shares, such as my laptop. I needed to put some ISOs on it, so I decided to create a local storage repository. This way, I would be able to simply wget an ISO from the web, and then use it locally.


First, SSH into the XenServer and create a directory for the repository;

mkdir -p /var/opt/xen/LocalRepo

Then, tell Xen to create a XenServer Storage Repository at that directory;

xe sr-create name-label=LocalRepo type=iso device-config:location=/var/opt/xen/LocalRepo device-config:legacy_mode=true content-type=iso

Now move to the directory and then wget whatever ISOs you need…

cd /var/opt/xen/LocalRepo



Now you’re cooking with gas!


PS. Make sure to check your free space and make sure your ISOs will fit. This partition is not very big by default.

df -H