Which Came First – The Chicken or the Egg?

One of the great things about having children is that it reacquaints you with things you have not thought about for a long time. The old Chicken or the Egg paradox is one of those classic brain teasers that children of a certain age love. It is a really good one since the answer depends on how you parse the question. I thought I would list all the different answers my children and I could come up with.

Evolution 1 – Chicken

The first Chicken had to have hatched from an egg laid by a proto-chicken (i.e. a bird that was very similar to a chicken, but not actually a chicken). This means that the Chicken came before the first chicken Egg since only a chicken can lay a chicken egg.

Evolution 2 – Egg

If we consider a chicken egg to be an Egg that a Chicken hatches from, then the Egg must come first. It might have been laid by a Proto-Chicken, but out of this Egg hatched a Chicken.

Evolution 3 – Egg

Eggs are much older than Chickens. What we now recognise as Eggs first appeared at least 300 million years ago. This was long before the first Chicken, which is a domesticated version of the Indian Red Jungle Fowl from sometime in the last 10,000 years.

Evolution 4 – Unanswerable

Given that the line separating a Chicken from a Proto-Chicken is undefined, it is not possible, even in theory, to say when the first Chicken hatched, even if we had access to a time machine. If we can’t know when the first Chicken hatched, we can’t answer the question.

Biblical – Chicken

According to Genesis 1, God created the birds on Day 5, therefore the Chicken was created before the first Egg. It is an open question whether the first Chickens were created with fully formed eggs inside them, and so whether the first Egg was laid on Day 5 or not.

Word Order – Chicken

In the question “Which came first the Chicken or the Egg?”, the word Chicken precedes the word Egg.

Word Origin – Egg

The word Egg comes from Old Norse, tracing back through Proto-Germanic to Proto-Indo-European. It is a much older word than Chicken, which is an Old English word of unknown origin.

English Language – Chicken

The original word for Egg in Old English was Ey, and only during the development of Middle English did the Norse word egg become the common term. The word Chicken is from Old English and so it appeared first in the English language.

Dictionary – Chicken

In the English Dictionary the letter C comes before the letter E, hence Chicken comes first. The same applies to Encyclopaedias, although of course no child of today knows what an Encyclopaedia is.

Wikipedia – Chicken

The first entry for Egg was in 2005 while the first entry for Chicken was in 2004. Who would have guessed?

Finish Line – Chicken

In a race a Chicken will always beat an Egg to the finish line.

Drop Test – Egg

Chickens can fly, so if you drop a Chicken and an Egg off a barn roof together, the Egg will hit the ground first. Chickens are surprisingly good flyers once they are allowed out to roam around for a few months.

There must be more!

The Last Word on Free Will

People have been arguing over whether free will exists for millennia with little progress. You have the Incompatibilists on one side arguing that free will and a deterministic universe can’t both be true, and the Compatibilists arguing that they can. While the heavy artillery appears to be on the side of Incompatibilism (the universe does appear to be deterministic), the inherent nihilism of Incompatibilism has meant most people have opted for some flavour of Compatibilism of varying sophistication. The arguments for both sides wash back and forth and we are no closer to an answer than the ancient Greeks.

Rather than approaching the question of free will from a philosophical perspective, we can just approach it empirically.

  1. The probability free will exists is greater than zero. Our knowledge of the universe is incomplete, so no matter how much evidence there is supporting a belief, we cannot assign a probability of zero to any hypothesis that negates this belief. All the evidence suggests fairies don’t exist at the bottom of the garden, but there is some finite probability that they do. In the case of free will, this means that while all the evidence points to it not existing, we cannot say with certainty it does not exist.
  2. If there is no free will then it is meaningless what beliefs you hold about free will. Nothing is lost in life believing in free will if it doesn’t exist, since whatever beliefs you hold were predetermined.
  3. If there is free will then believing there is no free will is throwing away your life. If free will exists and you go through life believing everything is predetermined, then you will have missed making the choices free will opened to you. You may spend your life in a nihilistic funk when you could have chosen differently.

Given these three statements the only conclusion we can reach is we have to live as though free will exists even if everything we know points to it not existing. Nothing is lost believing in free will if it doesn’t exist, while everything of importance is lost if you don’t believe in free will and it exists. No matter how unlikely free will is, and it appears very unlikely, the conclusion doesn’t change – as long as our knowledge of the universe is incomplete the only rational action is to live as though free will exists.
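The three statements can be condensed into a simple payoff matrix (a sketch, using my own labels for the outcomes):

```latex
\begin{array}{l|cc}
 & \text{Free will exists} & \text{No free will} \\
\hline
\text{Live as if free will exists} & \text{your choices count} & \text{nothing lost (belief was predetermined)} \\
\text{Live as if it does not}      & \text{a life thrown away} & \text{nothing lost} \\
\end{array}
```

Whatever probability you assign to the left column, as long as it is greater than zero the first row dominates.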

While I am sure that I am not the first person to propose this solution to the free will problem, I have not been able to find who first proposed it. If anyone knows the source of this argument please leave a comment.

Easy Creme Brulee

For something as simple as creme brulee, I wondered why so many recipes turn four ingredients and a few basic steps into a cooking challenge. Here is my easy recipe for creme brulee, adapted for Australian conditions. The key to success is to use high quality ingredients and to avoid curdling the custard by overcooking. Vanilla bean paste tastes and looks the same as whole vanilla beans, but it is a whole lot easier. Follow these simple steps and it is hard to go wrong.


  • 600 mL (1 large carton) of fresh single cream (35% milk fat).
  • 6 quality eggs.
  • 1/3 cup (75 g) of white sugar.
  • 1 teaspoon of vanilla bean paste (important).


  • Preheat the oven to 160˚C. Boil 2 L of water in an electric kettle.
  • Pour cream into a small saucepan and add the sugar. Heat slowly to the point just before boiling (don’t let the cream boil).
  • While the cream is heating, separate 6 egg yolks into a medium-sized bowl and add the vanilla bean paste. Mix with a hand whisk (or fork) until the vanilla and yolks are combined (10 seconds).
  • Once the cream reaches near boiling, remove and pour slowly into the yolk mixture whisking gently the whole time. This should take about 20 seconds.
  • Pour the thin custard through a fine sieve into a glass 1 L or larger jug. This will remove any egg lumps and make it easier to pour into the ramekins (i.e. the small ceramic bowls).
  • Place 4 to 6 ramekins in a metal baking tray that is 10-15cm deep. Pour the custard into each of the ramekins and place the whole tray in the hot oven.
  • Pour in the boiling water until it comes 2 cm from the top of the ramekins. It is easier to pour in the boiling water when the tray is in the oven than trying to move a tray filled to the brim with boiling water.
  • Bake for 30 min. Remove the ramekins and let cool on the bench (30 minutes).
  • Wrap each ramekin with plastic wrap and place in the fridge for at least 3 hours.
  • Just before serving, remove from the fridge, add two teaspoons of sugar (any type will do) on top of the custard, shake gently to evenly distribute the sugar, and then blacken the sugar with a butane blowtorch. You can also use the oven grill, but it tends to heat the custard a bit more than a blowtorch – also, blowtorches are fun.
  • Serve immediately to your impressed guests or family.


Bitcoin is being set up to fail spectacularly

Bitcoin Price 2017


Bitcoin is all the rage at the end of 2017. The interesting question is not why it has risen so high and so fast, but why it has not been made illegal. The most impressive observation about Bitcoin (and its block-chain brethren) is that it has been allowed to run free, sucking in all and sundry, most of whom have no idea what a block-chain is, but who know their friends and neighbours have made a fortune from it. The mass media has been neutral-to-supportive of the speculation.

The question is why those that control the monetary system (i.e. the rich and powerful) have allowed this run, given the revolutionary nature of Bitcoin. If Bitcoin succeeds they will lose control of their wealth and power to a bunch of computer anarchists with a cool idea.

The powerful could, if they wanted, shut down Bitcoin and all the other block-chain currencies tomorrow – when you are using as much electricity as a medium-sized country you can’t really hide. Despite the risk, nothing has happened. They are not stupid (well, the people advising them anyway), so it does not make much sense that Bitcoin has been allowed to continue.

The only rational hypothesis I have been able to come up with is that the intention is to ensure the general population does not just feel indifferent to Bitcoin (the result an early crackdown would have produced), but comes to totally hate it and the whole concept of the block-chain. Hate Bitcoin so much that no future idea like it can ever gain popular support.

With this in mind, the rise of Bitcoin makes much more sense. When the inevitable crash comes it will burn a huge number of ordinary people who have been sucked into the hype and speculative mania. The aim appears to be to ensure that Bitcoin (and by association all block-chain currencies) is seen as the greatest scam of the last 100 years.

If what I am suggesting is true then Bitcoin has some way to run yet (my guess is at least six months). The risk of overturning the current monetary system (and the wealth and power that comes from controlling it) is far too great to ever let any alternative arise. Bitcoin has to do more than fail – it has to fail spectacularly. Everything is on track to ensure this outcome, and the pain from the economic fallout will be long and deep (for the little people anyway).

Damnatio memoriae

The last few days have reminded me that we really need to stop “glorifying” the actions of mass murderers and actually do something to prevent others repeating their actions. Making losers famous by mentioning them in the mass media just encourages more losers. We can learn from the past, and the Roman damnatio memoriae is an approach we would be wise to revive.

Rather than giving those that have stepped outside society’s boundaries fame, let us instead remove them from history. Total and utter obliteration. We have the technology and legal authority to completely delete the historical existence of someone evil. Remove everything about them: their birth, schooling, job history, marriage(s), relationships, photos, phone records, emails, Facebook posts, even the banal like credit records or receipts from Walmart. Remove everything about them such that they effectively never existed as a person. Leave nothing. If we need to refer to their actions then give them a pseudonym such as the “butcher of X”. In 100 years there will be no record or memory that they ever existed, while we still remember their victims.

It might seem impractical to follow such a path given the ubiquity of modern media, but in practice it is easier today to remove someone from history than it has ever been. A single authority with determination can track down and remove every fragment of an individual’s existence.

We must do something rather than wring our hands in despair and let history repeat. Let today be the last time evil has a name.

Dead simple ssh login monitoring with Monit and Pushover

Following on from my earlier post on how to set up Dead simple CentOS server monitoring with Monit and Pushover, I recently added monitoring for ssh logins. I wanted to be able to see who is logging into my servers and be notified if anyone not authorised gains access. If you have already set up a Monit and Pushover system then this just requires adding an extra monit .conf file.

Create the ssh logins monit .conf file with the following.

# nano /etc/monit.d/ssh_logins.conf

check file ssh_logins with path /var/log/secure 
  #Ignore logins from whitelisted IP addresses
  ignore match "/var/www/ignore_ips.txt"
  if match "Accepted publickey" then exec "/usr/local/bin/pushover.sh"
  if match "Accepted password" then exec "/usr/local/bin/pushover.sh"

If you want to be able to ignore logins from certain IP addresses (i.e. your own) then create a text file with the list of IP addresses to be ignored (one per line).

# nano /var/www/ignore_ips.txt
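The file is plain text with one address (or pattern) per line; monit treats each line as a regular expression to match against the log entries. A hypothetical whitelist (these are documentation example addresses, not real ones):

```
203.0.113.10
198.51.100.22
```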

Check that all the .conf files are correct

# monit -t

If everything is fine then restart monitoring by reloading the new .conf files.

# monit reload

Now anytime someone logs in to the server you will be sent a notification. The only downside is that the notification takes around a minute to arrive, since it is only pushed once monit next checks the secure logfile. It is possible to get instant notification by using pam_exec, but that is another post.

Easy Protection of File Downloads in WordPress


I recently wanted to protect some files from unauthorised download on a WordPress site, but still allow authorised users to easily access the files.

The simplest solution I found was to put the files in a custom directory, place the links to the files on a password protected WordPress page, and use a .htaccess file to limit access to the files to users who are logged in. This simple approach works rather well if you take a little care with the directory and/or file naming.

Here is the step-by-step guide.

1. Make a new directory on your site and upload the files you want to protect to this directory (using ftp or scp). Make sure you choose a directory name that is hard to guess. I would recommend a random string — something like “vg4thbspthdbd8th” — just don’t use this exact string!

mkdir /path_to_protected_directory/

2. ssh into the server and create a .htaccess file in the protected directory using nano.

sudo nano /path_to_protected_directory/.htaccess

3. Copy and paste the following text into the .htaccess file.

Options -Indexes
php_flag engine off
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?yourwebsite\.com [NC]
RewriteCond %{HTTP_COOKIE} !^.*wp-postpass.*$ [NC]
RewriteRule \.(zip|rar|exe|gz)$ - [NC,F,L]

4. Change the yourwebsite.com to your website’s actual name. You should also change the RewriteRule line to suit the content you wish to protect. Just add the extensions of any file type you want to protect from unauthorised download.
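For example, to also block direct downloads of PDFs and tarballs, the final rule might become (a sketch — pick whatever extensions match your own content):

```
RewriteRule \.(zip|rar|exe|gz|tar|pdf)$ - [NC,F,L]
```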

That is it.

The major limitation with this approach is the download protection depends on the content of the user’s cookies. Since these can be faked by the technically knowledgeable, the protection is not perfect.

This is not as big a problem as it might first appear, because as long as you give the files and/or the directory non-obvious names, any unauthorised user will not know the required path to the files. They will only know the correct paths if they can log in, and if they can do this, they don’t need to fake any cookies.

While not perfect, this approach should work well for the casual protection of unauthorised downloads, but don’t use it for very sensitive files!

Carnot Efficient Dyson Spheres are Undetectable by Infrared Surveys


An interesting series of papers was published in The Astrophysical Journal in 2014 by J. T. Wright and colleagues, who used data from the WISE and Spitzer wide-field infrared astronomical surveys to try to detect Dyson spheres [1-3]. While very thought provoking, the entire premise of their study rested on the assumption that the Dyson spheres created by advanced civilisations will radiate waste heat at around 290K [2:2.6.4]. This assumption allowed them to hypothesise that Dyson spheres radiating waste heat at this temperature would show up as very bright infrared sources, well above the 15-50K background emission from interstellar gas and dust clouds [2:2.6.4].

Wright et al. provided no detailed reason for assuming this waste heat value other than that the Carnot efficiency of a Dyson sphere around a sun-like star is 0.95 at 290K [2:2.6.3]. They felt that this was a “reasonable” value to use since, in their opinion, it balanced the materials required to build a Dyson sphere against the overall Carnot efficiency [2:2.6.4]. An important question that needs to be considered is whether any advanced civilisation capable of constructing Dyson spheres would throw away 5% of the potential energy available if this waste could be avoided. If we assume they could build more efficient Dyson spheres, would it be possible for us to detect them in the infrared spectrum above the background noise?

The Carnot efficiency of a Dyson sphere is determined by the Carnot equation η = 1 − Tw / T, where T is the temperature of the star (5800K for a star like our sun) and Tw is the temperature of the waste energy emitted by the sphere [2:2.6.3]. To achieve a 95% Carnot efficiency around a sun-like star, a Dyson sphere needs to have a radius approximately that of Earth’s orbit (i.e. 1 AU) [2:2.6.3].

As the sphere’s diameter grows larger, the waste energy temperature becomes lower and the efficiency higher. For example, to achieve a Carnot efficiency of 99%, the Tw would need to be ~58K, assuming a sun-like star. For a Dyson sphere to radiate at this temperature it would need to have a surface area 625 times greater than one that radiates at 290K (see equation 12 of [2]). This efficiency corresponds to a sphere with a radius of ~25 AU around sun-like stars.

For reasons unknown, Wright et al. decided to use a Carnot efficiency of 99.5% (with a corresponding Tw of 29K) as their counter example for why 95% was a reasonable efficiency for any Dyson sphere building civilisation. They calculated that a sphere achieving this Carnot efficiency would need a surface area 10,000 times larger (a 100 AU radius), but assumed that a Dyson sphere of this size would be impractical and hence that only spheres with an efficiency of 0.95 would be built.
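These numbers all follow from the Carnot relation plus the assumption that the sphere radiates its waste heat as a black body, so for a fixed stellar luminosity the radiating area scales as Tw⁻⁴ (Stefan–Boltzmann). A quick sanity check of the scaling, taking the 0.95-efficient sphere to sit at 1 AU as in [2]:

```python
# Sanity check of the scaling argument, assuming a sun-like star (T = 5800 K)
# and black-body waste radiation (area scales as Tw^-4 for fixed luminosity).

T_STAR = 5800.0  # surface temperature of a sun-like star (K)

def waste_temperature(eta, t_star=T_STAR):
    """Waste-heat temperature from the Carnot relation eta = 1 - Tw / T."""
    return (1.0 - eta) * t_star

def area_ratio(eta, eta_ref=0.95, t_star=T_STAR):
    """Surface area relative to a sphere with efficiency eta_ref."""
    return (waste_temperature(eta_ref, t_star) / waste_temperature(eta, t_star)) ** 4

def radius_au(eta, eta_ref=0.95, t_star=T_STAR):
    """Radius in AU, taking the eta_ref sphere to sit at 1 AU."""
    return area_ratio(eta, eta_ref, t_star) ** 0.5

print(waste_temperature(0.95))   # ~290 K, the Wright et al. assumption
print(waste_temperature(0.99))   # ~58 K
print(area_ratio(0.99))          # ~625 times the area
print(radius_au(0.99))           # ~25 AU
print(area_ratio(0.995))         # ~10,000 times the area (Tw ~ 29 K)
```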

This is an unusual assumption to make, since it means any advanced civilisation capable of building a Dyson sphere would have to waste 5% of the potential energy available. A 0.99 or better Carnot efficient sphere could be built using only a small fraction of the material resources available within our solar system [2]. If you are a civilisation able to build a Dyson sphere the size of Earth’s orbit, then you would be able to build one larger and much more efficient with a relatively small increase in resources and time.

The consequences of this 0.95 efficiency choice are not minor. If Wright et al. had assumed Dyson spheres are 0.99 (or better) Carnot efficient then their emission spectra would not be detectable above the background infrared emission of interstellar gas and dust – put simply, the emission signal from efficient Dyson spheres will be swamped by infrared noise in any wide-field infrared survey.

Unfortunately this means that all we can conclude from the Wright et al. study is that there are few (or no) Dyson spheres built with a 0.95 (or less) Carnot efficiency. If Dyson spheres do exist, and they are efficient (which we should expect of any advanced civilisation capable of building such spheres), we won’t be able to spot them via infrared astronomical surveys. The good news is there is a different approach for finding efficient Dyson spheres, but that is another post.



2. Wright, J. T., Griffith, R. L., Sigurðsson, S., Povich, M. S., Mullan, B. (2014). The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. II. Framework, Strategy, and First Result. The Astrophysical Journal, 792:27.

3. Griffith, R. L., Wright, J. T., Maldonado, J., Povich, M. S., Sigurdsson, S., Mullan, B. (2014). The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. III. The Reddest Extended Sources in WISE. The Astrophysical Journal, 792:28.

Dead simple CentOS server monitoring with Monit and Pushover


My company Nucleics has an array of servers distributed around the world to support our PeakTrace Basecaller. For historical reasons these servers are a mix of CentOS 6/7 VPS and physical servers supplied by three different companies. While the Auto PeakTrace RP application is designed to be robust in the face of server downtime, I wanted a dead simple monitoring service that would fix 99% of server problems automatically and only contact me if there was something really wrong. After looking around all the paid services I settled on using a combination of Monit and Pushover.

Monit is an open source watchdog utility that can monitor other Linux services and automatically restart them if they crash or stop working. The great thing about monit is that you can set it up to fix things on its own. For example, if the server can be fixed by simply restarting apache then I want the monitoring service to just do this and only send me a message if something major has happened. I also wanted a service that would ping my phone, but where I could easily control it (i.e. turn on/off, set away times, etc).

Pushover looked ideal for doing this. For a one-off cost of $5 you can use the Pushover API to send up to 7,500 messages a month to any phone. It has lots of other nice features like quiet times and group notification. It comes with a 7 day free trial so you have time to make sure everything is going to work with your system before paying.

The only issue with integrating monit and pushover is that by default monit is set to email alert notices. Most of our servers don’t have the ability to email (they are slimmed down and are only running the services needed to support PeakTrace). Luckily, monit can also execute scripts, so I settled on the alternative approach of calling the Pushover API via an alert script that would pass through exactly which server and service was having problems. This alert script is set to only be called if monit cannot fix the problem by restarting the service. After a bit of experimentation I got the whole system running rather nicely.

Here is the step-by-step guide. I did all this logged in as root, but if you don’t like to live on the edge just put sudo in front of every command.

Setting up Pushover

After registering an account at Pushover and downloading the appropriate app for your phone (iOS or Android), you need to set up a new pushover application on the Pushover website.

Click on Register an Application/Create an API Token. This will open the Create New Application/Plugin page.

  • Give the application a name (I called it Monit), but you can call it anything you like.
  • Choose “script” as the type.
  • Add a description (I called it Monit Server Monitoring).
  • Leave the url field blank.
  • If you want you can add an icon, but you don’t need to do this. It is nice though having an icon when you get a message.
  • Press the Create Application button.

You need to record the new application API Token/Key as well as your Pushover User Key (you can find this on the main pushover page if you are logged in). You will need both these keys to have monit be able to ping Pushover via the alert script.

Install Monit

Install the EPEL package repository.

# yum install -y epel-release

Install monit and curl.

# yum install -y monit curl

Set monit to start on boot and start monit.

# chkconfig monit on && service monit start

You can edit the monit.conf file in /etc, but the default values are fine. Take a look at the monit man page for more details about what you might want to change.
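For example, the polling interval is controlled by the set daemon directive in this file; to have monit check your services every 30 seconds you would use (standard monit syntax):

```
set daemon 30
```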

Create the Pushover Alert Script

You need to create the script that monit will call when it raises an alert.

# nano /usr/local/bin/pushover.sh

Paste the following text substituting your own API Token and User Keys before saving.

 #!/bin/sh
 # Send a Pushover notification describing the monit alert
 /usr/bin/curl -s --form-string "token=API Token" \
 --form-string "user=User Key" \
 --form-string "message=[$MONIT_HOST] $MONIT_SERVICE - $MONIT_DESCRIPTION" \
 https://api.pushover.net/1/messages.json

Make the script executable.

# chmod 700 /usr/local/bin/pushover.sh

Test that the script works. If there are no issues the script will return without error and you will get a short message in the Pushover phone app almost immediately.

# /usr/local/bin/pushover.sh

Configure Monit

Once you have the pushover.sh alert script set up you need to create all the service-specific monit .conf files. You can mix and match these to suit the services you are running on your server. The aim is to have monit restart the service if there are any issues and, only if this does not solve the problem, call the pushover.sh alert script. This way most servers will fix themselves and you only get contacted if something catastrophic has happened.


# nano /etc/monit.d/system.conf

check system $HOST
if loadavg (5min) > 4 then exec "/usr/local/bin/pushover.sh"
if loadavg (15min) > 2 then exec "/usr/local/bin/pushover.sh"
if memory usage > 80% for 4 cycles then exec "/usr/local/bin/pushover.sh"
if swap usage > 20% for 4 cycles then exec "/usr/local/bin/pushover.sh"
if cpu usage (user) > 90% for 4 cycles then exec "/usr/local/bin/pushover.sh"
if cpu usage (system) > 80% for 4 cycles then exec "/usr/local/bin/pushover.sh"
if cpu usage (wait) > 80% for 4 cycles then exec "/usr/local/bin/pushover.sh"
if cpu usage > 200% for 4 cycles then exec "/usr/local/bin/pushover.sh"


# nano /etc/monit.d/apache.conf

check process httpd with pidfile /var/run/httpd/httpd.pid
start program = "/etc/init.d/httpd start" with timeout 60 seconds
stop program = "/etc/init.d/httpd stop"
if children > 250 then restart
if loadavg(5min) greater than 10 for 8 cycles then exec "/usr/local/bin/pushover.sh"
if failed port 80 for 2 cycles then restart
if 3 restarts within 5 cycles then exec "/usr/local/bin/pushover.sh"


# nano /etc/monit.d/sshd.conf

check process sshd with pidfile /var/run/sshd.pid
start program "/etc/init.d/sshd start"
stop program "/etc/init.d/sshd stop"
if failed port 22 protocol ssh then restart
if 5 restarts within 5 cycles then exec "/usr/local/bin/pushover.sh"


# nano /etc/monit.d/fail2ban.conf

check process fail2ban with pidfile /var/run/fail2ban/fail2ban.pid
start program "/etc/init.d/fail2ban start"
stop program "/etc/init.d/fail2ban stop"
if 5 restarts within 5 cycles then exec "/usr/local/bin/pushover.sh"


# nano /etc/monit.d/syslog.conf

check process rsyslog with pidfile /var/run/syslogd.pid
start program "/etc/init.d/rsyslog start"
stop program "/etc/init.d/rsyslog stop"
if 5 restarts within 5 cycles then exec "/usr/local/bin/pushover.sh"


# nano /etc/monit.d/crond.conf

check process crond with pidfile /var/run/crond.pid
start program "/etc/init.d/crond start"
stop program "/etc/init.d/crond stop"
if 5 restarts within 5 cycles then exec "/usr/local/bin/pushover.sh"


# nano /etc/monit.d/mysql.conf

check process mysqld with pidfile /var/run/mysqld/mysqld.pid
start program = "/etc/init.d/mysqld start"
stop program = "/etc/init.d/mysqld stop"
if failed host port 3306 then restart
if 5 restarts within 5 cycles then exec "/usr/local/bin/pushover.sh"

Check that all the .conf files are correct

# monit -t

If everything is fine then start monitoring by loading the new .conf files.

# monit reload

Check the status of monit by using

# monit status

This should give you something like this depending on which services you are monitoring.

The Monit daemon 5.14 uptime: 3d 20h 17m

System 'rps.peaktraces.com'
 status Running
 monitoring status Monitored
 load average [0.00] [0.12] [0.11]
 cpu 0.2%us 0.1%sy 0.0%wa
 memory usage 106.6 MB [10.7%]
 swap usage 0 B [0.0%]
 data collected Tue, 19 Jul 2016 04:16:06

Process 'rsyslog'
 status Running
 monitoring status Monitored
 pid 1016
 parent pid 1
 uid 0
 effective uid 0
 gid 0
 uptime 4d 23h 33m
 children 0
 memory 3.4 MB
 memory total 3.4 MB
 memory percent 0.3%
 memory percent total 0.3%
 cpu percent 0.0%
 cpu percent total 0.0%
 data collected Tue, 19 Jul 2016 04:16:06

Process 'sshd'
 status Running
 monitoring status Monitored
 pid 1176
 parent pid 1
 uid 0
 effective uid 0
 gid 0
 uptime 4d 23h 33m
 children 4
 memory 1.2 MB
 memory total 20.7 MB
 memory percent 0.1%
 memory percent total 2.0%
 cpu percent 0.0%
 cpu percent total 0.0%
 port response time 0.006s to [localhost]:22 type TCP/IP protocol SSH
 data collected Tue, 19 Jul 2016 04:16:06

Process 'fail2ban'
 status Running
 monitoring status Monitored
 pid 1304
 parent pid 1
 uid 0
 effective uid 0
 gid 0
 uptime 4d 23h 33m
 children 0
 memory 30.2 MB
 memory total 30.2 MB
 memory percent 3.0%
 memory percent total 3.0%
 cpu percent 0.1%
 cpu percent total 0.1%
 data collected Tue, 19 Jul 2016 04:16:06

Process 'crond'
 status Running
 monitoring status Monitored
 pid 1291
 parent pid 1
 uid 0
 effective uid 0
 gid 0
 uptime 4d 23h 33m
 children 0
 memory 1.2 MB
 memory total 1.2 MB
 memory percent 0.1%
 memory percent total 0.1%
 cpu percent 0.0%
 cpu percent total 0.0%
 data collected Tue, 19 Jul 2016 04:16:06

Process 'httpd'
 status Running
 monitoring status Monitored
 pid 20963
 parent pid 1
 uid 0
 effective uid 0
 gid 0
 uptime 4h 5m
 children 2
 memory 7.7 MB
 memory total 19.0 MB
 memory percent 0.7%
 memory percent total 1.9%
 cpu percent 0.0%
 cpu percent total 0.0%
 data collected Tue, 19 Jul 2016 04:16:06


You may want to adjust the system.conf values if your server is under sustained high load, so as to scale back the pushover triggers. Since you will know exactly which rule triggered, this is quite easy to do.

To create a monit .conf file for a new service you just need to make sure that you use the correct .pid file path for the service and that the start and stop paths are correct. These can be a little non-obvious (look at syslog.conf for example). If you do make a mistake, monit -t and monit status will show you what is wrong.

Once you have all this in place then sit back, relax and let the servers take care of themselves (well we can all dream).

Edit July 2017. I have been using this system for over a year now and it has been working great. I have had no problem that monit has not fixed by itself by just restarting the service. About the only issue I have had is load spikes on the server caused by a runaway service that was not monitored.

I have recently used the same approach to monitor for unauthorised logins, which I wrote up in Dead simple ssh login monitoring with Monit and Pushover.

A Quick & Dirty Analysis of Apply HN

Y Combinator recently launched a new initiative where they asked the Hacker News community to identify promising startups to fund via the YC fellowship program. This program provides US$20,000 to very early/idea stage startups to build their prototype/MVP. Potential participants were asked to post their concept to Apply HN, and members of the HN community were asked to discuss the concepts and make ‘nice’ suggestions. The two best public applications will be funded by YC at the end of the month.

While a very interesting experiment in itself (I really do applaud YC for actually trying new ideas in the VC world), the most fascinating aspect of this experiment is that it gives us all too rare access to the VC pitch firehose. The investment community rarely (never) shares the raw data on the pitches that come across their desks, and we are left only seeing the end product (the startups they fund). As the founder of a startup you really have no way of knowing what competition you are facing for investor time and dollars. Is your startup the next great opportunity or just another lost cause doomed to failure?

To answer this question I did a quick and dirty analysis of all 194 applications (minus my own) as of 11.30am 12th April AEST. I carefully read through every application and all the comments (this took me a bit over six hours) and sorted the applications into one of ten categories (see below) on the basis of their investment potential (Figure 1). Many startups fell into more than one category (e.g. non-profit and network required), and in these cases I sorted them on the basis of what I believed was the primary category. After completing the analysis I randomly selected 25 applications from the pool and blindly reclassified them. The two classifications agreed in every case, giving me faith that while my classification process may be invalid, it is at least replicable.
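The replicability check described above can be sketched in a few lines of code. This is purely an illustration with toy data (the category names are from this post, but the application-to-category assignments below are made up), not the actual dataset:

```python
import random

CATEGORIES = [
    "Non Profit", "Network Required", "Existing Players", "Lifestyle Business",
    "Feature Not Business", "Not Scalable", "Too Big", "Troll", "Biotech",
    "Unicorn Embryo",
]

def agreement_rate(first_pass, second_pass, sample_ids):
    """Fraction of sampled applications given the same category both times."""
    matches = sum(first_pass[i] == second_pass[i] for i in sample_ids)
    return matches / len(sample_ids)

# Toy data: 193 applications, with the blind reclassification happening
# to agree with the first pass on every application.
first = {i: CATEGORIES[i % len(CATEGORIES)] for i in range(193)}
second = dict(first)

# Randomly sample 25 applications and measure agreement, as in the post.
sample = random.sample(sorted(first), 25)
print(agreement_rate(first, second, sample))  # 1.0 when all 25 agree
```

A rate below 1.0 would suggest the categories were too fuzzy to apply consistently, which is the failure mode this check guards against.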


Figure 1. Categorization of Apply HN applications by investment potential.


Non Profit (48)

These startups were either explicitly not for profit, or implicitly not for profit in the sense that there was no way of them ever making a profit from their product or service. Many were attempts to scratch an itch of the founder(s), but none in this category appeared to answer the most fundamental question any investor will ask – how are you going to make money?

Network Required (41)

This was the second largest category. Many of the startups proposed or begun would be great businesses if the founders could get 10 million users – the problem is that none had a way to get to this point other than hoping that if they build it, people will come. I am highly sceptical that it is possible to build a mass-market network-based business today, and most niche markets still exploitable are too small to offer the returns the investment community requires. Building a new network business is a huge task – I am not saying it is impossible, but it is going to be very hard convincing investors you can do it and make a profit unless you have something really compelling.

Existing Players (31)

A surprising number of applications were me-too startups where one or more strong existing competitors dominate the market and where the proposed offering was not at least 10 times better (the comments were great for drawing this out). It is fantastic to have a product for which the market already exists, since you don’t need to create a market, but your product needs to be significantly better if you want customers to switch. Just being a little better than the competition is not enough.

Lifestyle Business (30)

These I classified as having the potential to be good businesses, but never to make the sort of returns required by the VC investor community. There were quite a few great startups and ideas in this category, but the accessible market (even allowing for later expansion) is just too niche. I am personally highly supportive of founders developing lifestyle businesses, but if your startup can only ever make a profit of a few million dollars a year (if everything goes right), it won’t interest most investors.

Feature Not Business (17)

In this category there were lots of great ideas, but they just weren’t big enough to sustain a business. You don’t want to build your business providing something that can be easily replicated by one of the big players.

Not Scalable (14)

It is fine to do things that don’t scale when you are building your startup, but if your processes can never be automated and will always need highly skilled labor, the business will not be able to expand into a billion-dollar business. These types of startups can make great lifestyle businesses if the margins are high, but trying to develop a non-automatable technology business that only offers low margins is a slow and nasty way to go broke.

Too Big (5)

These were ideas (some great) that were just too big for the YC fellowship program. If your startup is going to need $100 million to create the prototype, then you are in for a hard slog finding investors who will back you. The way to approach these sorts of ideas is the way Elon Musk built SpaceX – start small in other businesses, and as you gain credibility and success, investors will be willing to back your big ideas. Dream big, but take small steps.

Troll (5)

Not all applications were serious proposals, although some were amusing.

Biotech (2)

There were a couple of biotech/medical device startups on the list. This industry is notorious for losing money and, unless you are very knowledgeable, one an investor should avoid.

Unicorn Embryo (0)

These were startups that had the potential to be worth over a billion dollars if everything went right. They needed to be tackling a multibillion dollar market in a technologically innovative manner and have a plausible plan on how to grow and defend this market. This is the sort of startup investors want to back. Unfortunately there were none.

I should add that, for conflict of interest reasons, my own application was not assessed. I will leave it to others to decide what category it should be in.


I was most surprised to see how little emphasis applicants placed on eventual profitability. Yes, it is fine to make a loss when launching, but you have to have a credible plan for how you will make a profit at some point in the future. I was also surprised to see how many network-requiring applications seemed to have no viable plan for growing their network beyond making something cool and hoping the masses will come. Network-based businesses are so valuable precisely because they are so hard to create.

All in all this experiment has been very valuable and I thank YC for running it even if they don’t find a unicorn embryo to invest in.


Apply HN ended with a shortlist of 20 applications and, interestingly, all 20 were from the 194 I reviewed here (there were a total of 343 applications). While the HN voting selection process ended in controversial circumstances over Pinboard, I was very honoured to have made the shortlist with my idea-only application – TruSert.