Tuesday, January 14, 2014
Once a startup's MVP has been released, the question is always: what's next?
Many startups use the time after sorting out the quirks and bugs to develop more features and enrich the product.
But adding more features before you've even found out what works will only distract your new users, who are still trying to get to know your product.
A good example might be Google Plus vs. Twitter.
Twitter has remained essentially the same since its inception. Google Plus has been completely overhauled more than once and is crammed with features. While Google stuffed its social network with more and more (sharing, video, a fancy image viewer, extra menus and what not), worrying about how to implement each one differently than Facebook, Twitter focused on refining its mobile and web experience.
So the next time your R&D department has spare time, have them improve performance, clean up code, and work on automation and scale, rather than add more buttons and features that might just make your product bloated and redundant.
Your users will appreciate waiting 1 second instead of 5 for a response far more than another copycat feature.
Sunday, January 12, 2014
DO NOT UPGRADE!
Too often, when an application runs out of memory or performs poorly, the immediate solution is to upgrade the machine.
Add more cores, add more memory, and assume the problem has been swept under the rug.
This strategy actually causes even more damage: you will have to deal with the real issue (YOUR CODE...) later, when the system has more data, more angry clients, and more code that breaks.
If your application consumes too much CPU, you should profile it and solve the problem.
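How you profile depends on your stack; if it were Python, for instance, a minimal sketch with the standard cProfile module looks like this (handle_request is a made-up hot path standing in for your real code):

import cProfile
import pstats

def handle_request():
    # Hypothetical hot path standing in for your real request handler.
    return sum(i * i for i in range(1_000_000))

# Profile the suspect code path, then print the 10 most expensive
# calls sorted by cumulative time.
profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)

A few minutes with output like this usually points at the one or two functions eating most of the CPU.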
If your application leaks memory, you should find out where.
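In the same spirit, here's a rough Python sketch with the standard tracemalloc module, which reports the source lines holding the most memory (the ever-growing cache below is a deliberately leaky, made-up example):

import tracemalloc

tracemalloc.start()

# A hypothetical leak: a cache that grows forever and is never evicted.
cache = []
for i in range(100_000):
    cache.append("payload-%d" % i)

# Snapshot current allocations and show the top 5 offending lines.
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)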
Taking the lazy approach will cost you credibility once you really do need that upgrade.
Wednesday, January 1, 2014
Keep your data aligned!
Many big data systems analyse large, periodic data streams.
These streams are sometimes event based (e.g. add an entry whenever a user visits a page or performs an operation) and sometimes sample based (e.g. measure CPU level every 5 seconds).
Sometimes your sampling can be unreliable - for example, when monitoring activity over a WAN.
Then you get 'holes' in your data stream, and these holes cause problems when analysing your data.
Several companies I know have developed utility functions that periodically sweep the data streams and 'fix' these holes. These functions are usually costly: finding the holes in a large dataset is complicated and performance demanding, and fixing them (especially via 'update' operations) is costly too.
My suggestion: fix the problem before it arises. Keep your data aligned before you insert it into the database. Whenever there's a missed reading, fix it on the next reading by keeping track of your last reading's timing.
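Here's a minimal Python sketch of that idea (the 5-second interval, the carry-forward fill policy, and the insert_row function are all assumptions - plug in your own storage call and fill strategy):

INTERVAL = 5  # expected seconds between samples (an assumption)

_last = None  # (timestamp, value) of the previous reading

def insert_row(ts, value):
    # Stand-in for your real database insert (hypothetical).
    print(ts, value)

def record(ts, value):
    """Insert a reading, backfilling any slots missed since the last one."""
    global _last
    if _last is not None:
        prev_ts, prev_value = _last
        # Walk forward in fixed steps, filling each missed slot by
        # carrying the previous value forward.
        fill_ts = prev_ts + INTERVAL
        while fill_ts < ts:
            insert_row(fill_ts, prev_value)
            fill_ts += INTERVAL
    insert_row(ts, value)
    _last = (ts, value)

# A reading at t=10, then the next one arrives late at t=25: the t=15
# and t=20 slots are backfilled with the t=10 value before t=25 is stored.
record(10, 0.42)
record(25, 0.57)

Carrying the last value forward is just one policy; interpolating between readings or inserting explicit NULLs works the same way, and either keeps the nightly hole-fixing job off your database.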
You might find it cheaper to maintain a small pre-input processing machine for this than to buy a huge database server just because it has to align data overnight.