Entries in Python (16)

Tuesday
Sep 16, 2008

Product: Func - Fedora Unified Network Controller

Func is used to manage a large network using bash or Python scripts. It targets easy remote scripting and one-off tasks over SSH by creating a secure (SSL-certificate based) XMLRPC API for communication. Any kind of application can be written on top of it. Other configuration management tools specialize in mass configuration: they say here's what the machine should look like and keep it that way. Func, in contrast, lets you program your cluster. If you've ever tried to securely remote script a gang of machines using SSH keys you know what a total nightmare that can be. Some example commands:

Using the command line:
func "*.example.org" call yumcmd update

Using the Python API:
import func.overlord.client as fc
client = fc.Client("*.example.org;*.example.com")
client.yumcmd.update()
client.service.start("acme-server")
print client.hardware.info()
Func may certainly overlap in functionality with other tools like Puppet and cfengine, but as programmers we always need more than one way to do it, and I can definitely see how I could have used Func on a few projects.

Related Articles
  • High Scalability Operations Tag
  • Open source project: Func, the Fedora Unified Network Controller by Michael DeHaan.
  • Func, the Fedora Unified Network Controller by Luca Foppiano
  • Multi-system administration with Func by Jake Edge.

    Wednesday
    Sep 3, 2008

    MapReduce framework Disco

    Disco is an open-source implementation of the MapReduce framework for distributed computing. It was started at Nokia Research Center as a lightweight framework for rapid scripting of distributed data processing tasks. The Disco core is written in Erlang. MapReduce jobs in Disco are natively described as Python programs, which makes it possible to express complex algorithmic and data processing tasks, often in only tens of lines of code.
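
    For a flavor of how small such a job can be, here is a word-count sketch loosely based on Disco's own tutorial example. The names (Job, result_iterator, kvgroup) come from Disco's documented API, but exact signatures vary between Disco releases, and the input URL is purely a placeholder:

    from disco.core import Job, result_iterator
    from disco.util import kvgroup

    def map(line, params):
      # Emit (word, 1) for every word on an input line.
      for word in line.split():
        yield word, 1

    def reduce(iter, params):
      # Sum the counts for each word; kvgroup expects its input sorted.
      for word, counts in kvgroup(sorted(iter)):
        yield word, sum(counts)

    if __name__ == '__main__':
      job = Job().run(input=["http://example.com/sample_text.txt"],  # placeholder input
                      map=map,
                      reduce=reduce)
      for word, count in result_iterator(job.wait(show=False)):
        print word, count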

    Wednesday
    Jun 11, 2008

    Pyshards aspires to build a sharding toolkit for Python

    I've been interested in sharding concepts since first hearing the term "shard" a few years back. My interest had been piqued earlier, the first time I read about Google's original approach to distributed search. It was described as a hashtable-like system in which independent physical machines play the role of the buckets. More recently, I needed the capacity and performance of a sharded system, but did not find helpful libraries or toolkits which would assist with the configuration for my language of preference these days, which is Python. And, since I had a few weeks on my hands, I decided I would begin the work of creating these tools. The result of my initial work is the Pyshards project, a still-incomplete Python and MySQL based horizontal partitioning and sharding toolkit.

    HighScalability.com readers will already know that horizontal partitioning is a data segmenting pattern in which distinct groups of physical row-based datasets are distributed across multiple partitions. When the partitions exist as independent databases within a shared-nothing architecture they are known as shards. (Google apparently coined the term shard for such database partitions, and Pyshards has adopted it.) The goal is to provide big opportunities for database scalability while maintaining good performance. Sharded datasets can be queried individually (one shard) or collectively (aggregate of all shards).

    In the spirit of The Zen of Python, Pyshards focuses on one obvious way to accomplish horizontal partitioning, and that is by using a hash/modulo based algorithm. Pyshards provides the ability to reasonably add polynomial capacity (number of original shards squared) without re-balancing (re-sharding). Pyshards is also designed with re-sharding in mind (because the time will come when you must re-balance) and provides re-sharding algorithms and tools. Finally, Pyshards aspires to provide a web-based shard monitoring tool so that you can keep an eye on resource capacity.

    So why publish an incomplete open source project? I'd really prefer to work with others who are interested in this topic instead of working in a vacuum. If you are curious, or think you might want to get involved, come visit the project page, join a mailing list, or add a comment on the wiki: http://code.google.com/p/pyshards/wiki/Pyshards

    Devin
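
    To make the hash/modulo routing concrete, here is a minimal illustration. This is not the actual Pyshards API; the connection strings and the shard_for function are invented for the example:

    import hashlib

    # Hypothetical shard connection strings; real Pyshards configuration differs.
    SHARDS = [
      "mysql://db0.example.com/app_shard0",
      "mysql://db1.example.com/app_shard1",
      "mysql://db2.example.com/app_shard2",
      "mysql://db3.example.com/app_shard3",
    ]

    def shard_for(key):
      # Hash the row key with a stable hash, then take it modulo the shard count.
      digest = hashlib.md5(str(key)).hexdigest()
      return SHARDS[int(digest, 16) % len(SHARDS)]

    # All rows for a given key land on one shard, so single-shard queries go
    # straight there, while aggregate queries fan out across everything in SHARDS.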

    Tuesday
    Apr 8, 2008

    Google AppEngine - A First Look

I haven't developed an AppEngine application yet; I'm just taking a look around their documentation and seeing what stands out for me. It's not the much speculated super cluster VM. AppEngine is solidly grounded in code and structure. It reminds me a little of the guy who ran a website out of S3, with a splash of Heroku thrown in as a chaser. The idea is clearly to take advantage of our massive multi-core future by creating a shared-nothing infrastructure based firmly on a core set of infinitely scalable database, storage, and CPU services. Don't forget Google also has a few other services to leverage: email, login, blogs, video, search, ads, metrics, and apps.

A shared-nothing request is a simple beast. By its very nature a shared-nothing architecture must be composed of services which are themselves already scalable, and Google is signing up to supply that scalable infrastructure. Google has been busy creating a platform of out-of-the-box scalable services to build on. Now they have their scripting engine to bind it all together. Everything that could have tied you to a machine is tossed. No disk access, no threads, no sockets, no root, no system calls, no nothing but service-based access. Services are king because they are easily made scalable by load balancing and other tricks of the trade that can be pulled off behind the scenes, without any application awareness or involvement.

Using the CGI interface was not a mistake. CGI is the perfect metaphor for our brave new app container world: get a request, process the request, die, repeat. Using AppEngine you have no choice but to write an app that can be splayed across a pointy, well-sharpened CPU grid. CGI was devalued because a new process had to be started for every request. It was too slow, too resource intensive. Ironic that in the cloud that's exactly what you want, because that's exactly how you cause yourself fewer problems and buy yourself more flexibility.

The model is pure abstraction. The implementation is pure pragmatism. Your application exists in the cloud and is in no way tied to any single machine or cluster of machines. CPUs run parallel through your application like a swarm of busy bees, while wizards safely hidden in a pocket of space-time can bend reality as much as they desire without the muggles taking notice. Yet the abstraction is implemented in a very specific dynamic language that they already have experience with and have confidence they can make work. It's a pretty smart approach. No surprise I guess.

One might ask: is LAMP dead? Certainly not in the way Microsoft was hoping. AppEngine is so much easier to use than the AWS environment of EC2, S3, SQS, and SDB. Creating an app in AWS takes real expertise. That's why I made the comparison of AppEngine to Heroku. Heroku is a load-and-go approach for RoR whereas AppEngine uses Python. You basically make a Python app using services and it scales. Simple. So simple you can't do much beyond making a web app. Nobody is going to make a super scalable transcoding service out of AppEngine. You simply can't load the needed software because you don't have your own servers. This is where Amazon wins big. But AppEngine does hit a sweet spot in the market: website builders who might have previously gone with LAMP.

What isn't scalable about AppEngine is the complexity of the applications you can build. It's a simple request-response system. I didn't notice a cron service, for example. Since you can't write your own services, a cron service would give you an opportunity to get a little CPU time of your own to do work. To extend this notion a bit, what I would like to see is an event-driven state machine service that could drive web services. If email needs to be sent every hour, for example, who will invoke your service every hour so you can get the CPU to send the email? If you have a long running seven step asynchronous event driven algorithm to follow, how will you get the CPU to implement the steps? This may be Google's intent. Or somewhere in the development cycle we may get more features of this sort. But for now it's a serious weakness.

Here's a quick tour of a few interesting points. Please note I'm copying large chunks of their documentation in this post as that seems the quickest way to the finish line...

  • Very nice project page at Google App Engine. It already has a FAQ, articles, a blog, forums, example applications, a nice tutorial, and, as a nice touch, instructions on how to work with Django. Some hard chargers are already posting questions to the forum.
  • Python only. More languages will follow. As you are uploading clear text into the engine there's no hiding from mother Google.
  • You aren't getting root. Applications run in a sandbox, which is a secure environment that provides limited access to the underlying operating system. These limitations allow App Engine to distribute web requests for the application across multiple servers, and start and stop servers to meet traffic demands. - An application can only access other computers on the Internet through the provided URL fetch and email services and APIs. Other computers can only connect to the application by making HTTP (or HTTPS) requests on the standard ports. - An application cannot write to the file system. An app can read files, but only files uploaded with the application code. The app must use the App Engine datastore for all data that persists between requests. - Application code only runs in response to a web request, and must return response data within a few seconds. A request handler cannot spawn a sub-process or execute code after the response has been sent.
  • The data access trend continues with the RDBMS being dissed in favor of a properties type interface. - The datastore is not like a traditional relational database. Data objects, or "entities," have a kind and a set of properties. Queries can retrieve entities of a given kind filtered and sorted by the values of the properties. Property values can be of any of the supported property value types. - The datastore uses optimistic locking for concurrency control. An update of an entity occurs in a transaction that is retried a fixed number of times if other processes are trying to update the same entity simultaneously. - They have some notion of transactions: The datastore implements transactions across its distributed network using "entity groups." A transaction manipulates entities within a single group. Entities of the same group are stored together for efficient execution of transactions. Your application can assign entities to groups when the entities are created. (A minimal transaction sketch appears after this list.)
  • You've got mail: Applications can also send email messages using App Engine's mail service. The mail service also uses Google infrastructure to send email messages. If you've ever been marked a spammer because you send a little email, this is actually a nice feature.
  • It's mostly free for now: 500MB of storage, up to 5 million page views a month, and 10GB bandwidth per day. Additional resources will be available for $$$.
  • Limits exist on various features. If a request takes too long it's killed. You can only get 1,000 results at a time. That sort of thing. Pretty reasonable.
  • Developers must download a Windows, Mac OS X or Linux SDK. Python 2.5 is required.
  • The SDK includes a web server application that simulates the App Engine environment. So this is in the RoR and GWT mold where you have a nice local development environment that emulates what happens in the deployment environment.
  • Google App Engine supports any framework written in pure Python that speaks CGI (and any WSGI-compliant framework using a CGI adaptor), including Django, CherryPy, Pylons, and web.py. You can bundle a framework of your choosing with your application code by copying its code into your application directory.
  • Google has their own framework called webapp. Nice MS style naming.
  • Here's a hello world application using webapp:
    import wsgiref.handlers
    
    from google.appengine.ext import webapp
    
    class MainPage(webapp.RequestHandler):
      def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello, webapp World!')
    
    def main():
      application = webapp.WSGIApplication(
                                           [('/', MainPage)],
                                           debug=True)
      wsgiref.handlers.CGIHandler().run(application)
    
    if __name__ == "__main__":
      main()
    
    This code defines one request handler, MainPage, mapped to the root URL (/). When webapp receives an HTTP GET request to the URL /, it instantiates the MainPage class and calls the instance's get method. Inside the method, information about the request is available using self.request. Typically, the method sets properties on self.response to prepare the response, then exits. webapp sends a response based on the final state of the MainPage instance. The application itself is represented by a webapp.WSGIApplication instance. The parameter debug=True passed to its constructor tells webapp to print stack traces to the browser output if a handler encounters an error or raises an uncaught exception. You may wish to remove this option from the final version of your application.
  • Google is standardizing components on their infrastructure. Take the login interface. When your application is running on App Engine, users will be directed to the Google Accounts sign-in page, then redirected back to your application after successfully signing in or creating an account.
  • Forms look normal. Lots of embedded HTML. Take a look. Python, like Perl, has nice bulk string handling syntax, so this style isn't as fugly as it would be in C++ or Java.
  • Database access is built around Data Models: A model describes a kind of entity, including the types and configuration for its properties. Here's a taste:
    Example of creation:
    from google.appengine.ext import db
    from google.appengine.api import users
    
    class Pet(db.Model):
      name = db.StringProperty(required=True)
      type = db.StringProperty(required=True, choices=set(["cat", "dog", "bird"]))
      birthdate = db.DateProperty()
      weight_in_pounds = db.IntegerProperty()
      spayed_or_neutered = db.BooleanProperty()
      owner = db.UserProperty()
    
    pet = Pet(name="Fluffy",
              type="cat",
              owner=users.get_current_user())
    pet.weight_in_pounds = 24
    pet.put()
    
    Example of get, modify, save:
    if users.get_current_user():
      # Fetch the results into a list so the modified entities are what gets
      # written back (re-iterating the query would re-fetch from the datastore).
      user_pets = db.GqlQuery("SELECT * FROM Pet WHERE owner = :1",
                              users.get_current_user()).fetch(100)
      for pet in user_pets:
        pet.spayed_or_neutered = True

      db.put(user_pets)
    
    Looks like your normal overly complex data access. Me, I appreciate the simplicity of a string based property interface.
  • You can use Django's HTML Template system.
  • Static files are served using an automated mapping mechanism. You don't get a local disk store for your css, flash, and js files.
  • Applications are loaded using a command line tool: appcfg.py update helloworld/.
  • Applications are accessed like: http://application-id.appspot.com. You get your own application-id subdomain on appspot.com.
  • There's a dashboard with graphs that give you a quick visual reference of your system usage: Requests per Second, Errors per Second, Bytes Received per Second, Bytes Sent per Second, Megacycles per Second (the amount of CPU megacycles your application uses every second), Milliseconds Used per Second, and Number of Quota Denials per Second. I have no idea what a megacycle is either. I think it's bigger than a pint of beer.
  • Also I wonder if this is meant to compete with Facebook more than Amazon?
  • Developers with a lot of little projects will find AppEngine especially useful, which always leaves open an Adoption Led Market play.
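
    To make the entity groups and transactions described in the datastore bullet above a bit more concrete, here is a minimal sketch in the style of the datastore docs. The Counter model and increment function are invented for illustration; db.run_in_transaction is the documented entry point, though exact usage may differ by SDK version:

    from google.appengine.ext import db

    class Counter(db.Model):
      # Toy entity used only for this illustration.
      count = db.IntegerProperty(default=0)

    def increment(key):
      # Runs inside a transaction; every entity touched here must belong
      # to the same entity group.
      counter = db.get(key)
      counter.count += 1
      counter.put()

    counter_key = Counter(count=0).put()
    db.run_in_transaction(increment, counter_key)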

    Related Articles

  • HighScalability: Rumors of Signs and Portents Concerning Freeish Google Cloud.
  • Techcrunch: Google Jumps Head First Into Web Services With Google App Engine.
  • HighScalability: Heroku - Simultaneously Develop and Deploy Automatically Scalable Rails Applications in the Cloud.
  • Video of the announcement at Camp David, er CampFireOne.
  • Techdirt: Google Finally Realizes It Needs To Be The Web Platform.
  • TechCrunch Labs: Our Experience Building And Launching An App On Google App Engine
  • Zdnet: Let the PaaS wars begin.
  • ReadWriteWeb: Google: Cloud Control to Major Tom.
  • ComputingAtScale: The Live Web has another heart.
  • ComputingAtScale: Parallelism: The New New Thing!.
  • EETimes: CPU designers debate multi-core future.
  • EETimes: Multicore puts screws to parallel-programming models.
  • Slashdot: Panic in Multicore Land.
  • Mashable: Google App Engine: An Early Look.
  • DeftLabs: Gazing Into The Clouds.
  • Experimenting with Google App Engine. Bret Taylor writes a blog post about the blog he wrote for AppEngine that serves the blog post. Elegant touch.
  • ReadWriteWeb: Google App Engine: History's Next Step or Monopolistic Boondoggle?.
  • GigaOM: App Engine: Competition Is Good for Everyone
  • The Blist: Batteries sold separately - "they’ve got the scalable bit and the hosting bit, but there’s a surprising lack of, well, “web” and “application” going on here."
  • Foobar: Would you use Google App Engine? - " if you think you might like to compete with Google and/or become a really large web site or business yourself, then I don't think that Google App Engine should be your first choice"
  • Silicon Valley Insider: Google's App Engine: Aiming At Facebook, Not Amazon - "Google is not trying to provide pure utility here -- they are trying to provide utility tethered to their infrastructure."
  • Niall Kennedy: Google App Engine for developers - "Overall I am quite impressed with Google App Engine and its potential to remove operations management and systems administration from my task list. I am not confident in Google App Engine as a hosting solution for any real business..."

    Wednesday
    Mar 12, 2008

    YouTube Architecture

    Update 3: 7 Years Of YouTube Scalability Lessons In 30 Minutes and YouTube Strategy: Adding Jitter Isn't A Bug

    Update 2: YouTube Reaches One Billion Views Per Day. That’s at least 11,574 views per second, 694,444 views per minute, and 41,666,667 views per hour. 

    Update: YouTube: The Platform. YouTube adds a new rich set of APIs in order to become your video platform leader--all for free. Upload, edit, watch, search, and comment on video from your own site without visiting YouTube. Compose your site internally from APIs because you'll need to expose them later anyway.

    YouTube grew incredibly fast, to over 100 million video views per day, with only a handful of people responsible for scaling the site. How did they manage to deliver all that video to all those users? And how have they evolved since being acquired by Google?

    Information Sources

  • Google Video

    Platform

  • Apache
  • Python
  • Linux (SuSe)
  • MySQL
  • psyco, a dynamic python->C compiler
  • lighttpd for video instead of Apache

    What's Inside?

    The Stats

  • Supports the delivery of over 100 million videos per day.
  • Founded 2/2005
  • 3/2006 30 million video views/day
  • 7/2006 100 million video views/day
  • 2 sysadmins, 2 scalability software architects
  • 2 feature developers, 2 network engineers, 1 DBA

    Recipe for handling rapid growth

    while (true) { identify_and_fix_bottlenecks(); drink(); sleep(); notice_new_bottleneck(); } This loop runs many times a day.

    Web Servers

  • NetScaler is used for load balancing and caching static content.
  • Run Apache with mod_fastcgi.
  • Requests are routed for handling by a Python application server.
  • The application server talks to various databases and other information sources to get all the data and formats the HTML page.
  • Can usually scale web tier by adding more machines.
  • The Python web code is usually NOT the bottleneck, it spends most of its time blocked on RPCs.
  • Python allows rapid flexible development and deployment. This is critical given the competition they face.
  • Usually less than 100 ms page service times.
  • Use psyco, a dynamic python->C compiler that uses a JIT compiler approach to optimize inner loops.
  • For CPU-intensive activities like encryption, they use C extensions.
  • Some pre-generated cached HTML for expensive to render blocks.
  • Row level caching in the database.
  • Fully formed Python objects are cached.
  • Some data are calculated and sent to each application so the values are cached in local memory. This is an underused strategy. The fastest cache is in your application server and it doesn't take much time to send precalculated data to all your servers. Just have an agent that watches for changes, precalculates, and sends (a small sketch of this follows the list).
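
    A minimal sketch of that push-style local cache agent, with invented names (the app server list, endpoint, and precalculation) standing in for whatever YouTube actually ran:

    import time
    import urllib
    import urllib2

    APP_SERVERS = ["http://app1.internal:8080", "http://app2.internal:8080"]  # hypothetical

    def precalculate():
      # Stand-in for an expensive computation, e.g. top videos per region.
      return {"top_videos": "1,2,3"}

    def push(values):
      # POST the precalculated values to a local-cache endpoint on every app server.
      body = urllib.urlencode(values)
      for server in APP_SERVERS:
        urllib2.urlopen(server + "/local_cache/update", body)

    while True:
      push(precalculate())  # each app server now answers from local memory
      time.sleep(60)        # or push only when the agent notices a change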

    Video Serving

  • Costs include bandwidth, hardware, and power consumption.
  • Each video hosted by a mini-cluster. Each video is served by more than one machine.
  • Using a cluster means: - More disks serving content which means more speed. - Headroom. If a machine goes down others can take over. - There are online backups.
  • Servers use the lighttpd web server for video: - Apache had too much overhead. - Uses epoll to wait on multiple fds. - Switched from single process to multiple process configuration to handle more connections.
  • Most popular content is moved to a CDN (content delivery network): - CDNs replicate content in multiple places. There's a better chance of content being closer to the user, with fewer hops, and content will run over a more friendly network. - CDN machines mostly serve out of memory because the content is so popular there's little thrashing of content into and out of memory.
  • Less popular content (1-20 views per day) uses YouTube servers in various colo sites. - There's a long tail effect. A video may have a few plays, but lots of videos are being played. Random disk blocks are being accessed. - Caching doesn't do a lot of good in this scenario, so spending money on more cache may not make sense. This is a very interesting point. If you have a long tail product, caching won't always be your performance savior. - Tune the RAID controller and pay attention to other lower level issues to help. - Tune memory on each machine so there's not too much and not too little.

    Serving Video Key Points

  • Keep it simple and cheap.
  • Keep a simple network path. Not too many devices between content and users. Routers, switches, and other appliances may not be able to keep up with so much load.
  • Use commodity hardware. The more expensive the hardware, the more expensive everything else gets too (support contracts). You are also less likely to find help on the net.
  • Use simple common tools. They use most tools built into Linux and layer on top of those.
  • Handle random seeks well (SATA, tweaks).

    Serving Thumbnails

  • Surprisingly difficult to do efficiently.
  • There are about 4 thumbnails for each video, so there are a lot more thumbnails than videos.
  • Thumbnails are hosted on just a few machines.
  • Saw problems associated with serving a lot of small objects: - Lots of disk seeks and problems with inode caches and page caches at the OS level. - Ran into a per-directory file limit, with Ext3 in particular. Moved to a more hierarchical directory structure. Recent improvements in the 2.6 kernel may improve Ext3 large directory handling up to 100 times, yet storing lots of files in a file system is still not a good idea. - A high number of requests/sec, as web pages can display 60 thumbnails per page. - Under such high loads Apache performed badly. - Used squid (reverse proxy) in front of Apache. This worked for a while, but as load increased performance eventually decreased. Went from 300 requests/second to 20. - Tried lighttpd, but it stalled in single-threaded mode. Ran into problems with multi-process mode because each process kept a separate cache. - With so many images, setting up a new machine took over 24 hours. - Rebooting a machine took 6-10 hours for the cache to warm up enough to avoid going to disk.
  • To solve all their problems they started using Google's BigTable, a distributed data store: - Avoids the small file problem because it clumps files together. - Fast, fault tolerant. Assumes it's working on an unreliable network. - Lower latency because it uses a distributed multilevel cache. This cache works across different collocation sites. - For more information on BigTable take a look at Google Architecture, GoogleTalk Architecture, and BigTable.

    Databases

  • The Early Years - Use MySQL to store metadata like users, tags, and descriptions. - Served data off a monolithic RAID 10 volume with 10 disks. - Living off credit cards, so they leased hardware. When they needed more hardware to handle load it took a few days to order and get delivered. - They went through a common evolution: a single server, then a single master with multiple read slaves, then partitioned the database, and then settled on a sharding approach. - Suffered from replica lag. The master is multi-threaded and runs on a large machine so it can handle a lot of work. Slaves are single threaded and usually run on lesser machines, and replication is asynchronous, so the slaves can lag significantly behind the master. - Updates cause cache misses which go to disk, where slow I/O causes slow replication. - Using a replicating architecture you need to spend a lot of money for incremental bits of write performance. - One of their solutions was to prioritize traffic by splitting the data into two clusters: a video watch pool and a general cluster. The idea is that people want to watch video, so that function should get the most resources. The social networking features of YouTube are less important so they can be routed to a less capable cluster.
  • The later years: - Went to database partitioning. - Split into shards with users assigned to different shards. - Spreads writes and reads. - Much better cache locality which means less IO. - Resulted in a 30% hardware reduction. - Reduced replica lag to 0. - Can now scale database almost arbitrarily.

    Data Center Strategy

  • Used managed hosting providers at first. Living off credit cards so it was the only way.
  • Managed hosting can't scale with you. You can't control hardware or make favorable networking agreements.
  • So they went to a colocation arrangement. Now they can customize everything and negotiate their own contracts.
  • Use 5 or 6 data centers plus the CDN.
  • Videos come out of any data center. Not closest match or anything. If a video is popular enough it will move into the CDN.
  • Video bandwidth dependent, not really latency dependent. Can come from any colo.
  • For images latency matters, especially when you have 60 images on a page.
  • Images are replicated to different data centers using BigTable. Code looks at different metrics to know who is closest.

    Lessons Learned

  • Stall for time. Creative and risky tricks can help you cope in the short term while you work out longer term solutions.
  • Prioritize. Know what's essential to your service and prioritize your resources and efforts around those priorities.
  • Pick your battles. Don't be afraid to outsource some essential services. YouTube uses a CDN to distribute their most popular content. Creating their own network would have taken too long and cost too much. You may have similar opportunities in your system. Take a look at Software as a Service for more ideas.
  • Keep it simple! Simplicity allows you to rearchitect more quickly so you can respond to problems. It's true that nobody really knows what simplicity is, but if you aren't afraid to make changes then that's a good sign simplicity is happening.
  • Shard. Sharding helps to isolate and constrain storage, CPU, memory, and IO. It's not just about getting more write performance.
  • Constant iteration on bottlenecks: - Software: DB, caching - OS: disk I/O - Hardware: memory, RAID
  • You succeed as a team. Have a good cross discipline team that understands the whole system and what's underneath the system. People who can set up printers, machines, install networks, and so on. With a good team all things are possible.

    Monday
    Oct 8, 2007

    Lessons from Pownce - The Early Years

    Pownce is a new social messaging application competing micromessage to micromessage with the likes of Twitter and Jaiku. Still in closed beta, Pownce has generously shared some of what they've learned so far. Like going to a barrel tasting of a young wine and then tasting the same wine after some aging, I think what will be really interesting is to follow Pownce and compare the Pownce of today with the Pownce of tomorrow, after a few years spent in the barrel. What lessons lie in wait for Pownce as they grow? Site: http://www.pownce.com/

    Information Sources

  • Pownce Lessons Learned - FOWA 2007
  • Scoble on Twitter vs Pownce
  • Founder Leah Culver's Blog

    The Platform

  • Python
  • Django for the website framework
  • Amazon's S3 for file storage.
  • Adobe AIR (Adobe Integrated Runtime) for desktop application
  • Memcached
  • Available on Facebook
  • Timeplot for charts and graphs.

    The Stats

  • Developed in 4 months and went to an invite-only launch in June.
  • Began as Leah's hobby project and then it snowballed into a real horse with the addition of Digg's Daniel Burka and Kevin Rose.
  • Small 4 person team with one website developer.
  • Self funded.
  • One MySQL database.
  • Features include: - Short messaging, invites for events, links, file sharing (you can attach mp3s to messages, for example). - You can limit usage to a specific subset of friends and friends can be grouped in sets. So you can send your mp3 to a specific group of friends. - It does not have an SMS gateway, IM gateway, or an API.

    The Architecture

  • Chose Django because it has an active community, good documentation, good readability, room to grow, and auto-generated administration.
  • Chose S3 because it minimized maintenance and was inexpensive. It has been reliable for them.
  • Chose AIR because it had a lot of good buzz, ease of development, creates a nice UI, and is cross platform.
  • Database has been the main bottleneck. Attack and fix slow queries.
  • Static pages, objects, and lists are cached using memcached (see the cache-aside sketch after this list).
  • Queuing is used to defer more complex work, like sending notes, until later.
  • Use pagination and a good UI to limit the amount of work performed.
  • Good indexing helped improve the performance for friend searching.
  • In a social site: - Make it easy to create and destroy relationships. - Friend relationships are the most important information to display correctly because people really care about it. - Friends in the online world have real-world effects.
  • Features are "biased" for scalability - You must get an invite from someone on already on Pownce. - Invites are limited to their data center's ability to keep up with the added load. Blindly uploading address books can bring on new users exponentially. Limiting that unnatural growth is a good idea.
  • Their feature set will expand but they aren't ready to commit to an API yet.
  • Revenue model: ads between posts.
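
    A minimal cache-aside sketch of the memcached usage above, using the python-memcached client. The key naming, TTL, and the load_note_from_db stand-in are invented for illustration:

    import memcache  # python-memcached client

    mc = memcache.Client(["127.0.0.1:11211"])  # hypothetical memcached host

    def load_note_from_db(note_id):
      # Stand-in for the real Django ORM query.
      return {"id": note_id, "body": "..."}

    def get_note(note_id):
      key = "note:%d" % note_id
      note = mc.get(key)
      if note is None:
        # Cache miss: load from the database and cache for five minutes.
        note = load_note_from_db(note_id)
        mc.set(key, note, time=300)
      return note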

    Lessons Learned

  • The four big lessons they've experienced so far are: - Think about technology choices. - Do a lot with a little. - Be kind to your database. - Expect anything.
  • Have a small dedicated team where people handle multiple jobs.
  • Use open source. There's lots of it, it's free, and there's a lot of good help.
  • Use your resources. Learn from website docs, use IRC, network, participate in communities and knowledge exchange.
  • Shed work off the database by making sure that complex features are really needed before implementing them.
  • Cultivate a prepared mind. Expect the unexpected and respond quickly to the inevitable problems.
  • Use version control and make backups.
  • Maintain a lot of performance related stats.
  • Don't promise users a deadline because you just might not make it.
  • Commune with your community. I especially like this one and I wish it was done more often. I hope this attitude can survive growth. - Let them know what you are working on and about new features and bug fixes. - Respond personally to individual bug creators.
  • Take a look at your framework's automatically generated queries. They might suck.
  • A sexy UI and a good buzz marketing campaign can get you a lot of users.

    Related Articles

  • Scaling Twitter: Making Twitter 10000 Percent Faster.
