Entries in PHP (14)

Sunday
May312009

Need help on Site loading & database optimization - URGENT

Hi Friends,

I need some help making my site load faster. On average the site gets 2,500 hits per day, but on 16th May it had 60,000 hits. On that day the site was loading very slowly and was even timing out. I also checked the running processes using the "top" command; it indicated that MySQL was under too much load.

There are around 166 tables in my database (including the phpBB forum). All content on the site is displayed by fetching it from the database. I have also added indexes to the relevant tables where required. Plain PHP/HTML coding is used.
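
One thing I am considering is caching rendered pages so repeat hits skip MySQL entirely. Would something like this sketch be the right direction? (This assumes the pecl memcache extension and a memcached daemon on localhost; build_frontpage_from_db() is just a placeholder for my existing page-building code.)

    <?php
    // Sketch: cache the rendered front page in memcached so repeat hits
    // skip MySQL. Assumes the pecl "memcache" extension and a daemon on
    // localhost:11211; build_frontpage_from_db() is a placeholder.
    $cache = new Memcache();
    $cache->connect('127.0.0.1', 11211);

    $html = $cache->get('frontpage_html');
    if ($html === false) {
        $html = build_frontpage_from_db();             // hit MySQL only on a miss
        $cache->set('frontpage_html', $html, 0, 300);  // cache for 5 minutes
    }
    echo $html;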

Technology:

PHP -- 5.2
MYSQL -- 5.0
Apache -- 2.0
Linux

Following is all the server details of my site:

CPU : Single Socket Dual Core AMD Opteron 1212HE
Memory: 2GB DDR RAM
Hard Drive: 250GB SATA
Ethernet: 100Mb Primary Ethernet Card

(/var/log) # uname -a
Linux 2.6.9-67.0.15.ELsmp #1 SMP Tue Apr 22 13:50:33 EDT 2008 i686 athlon i386 GNU/Linux

kernel version:
2.6.9-67.0.15.ELsmp

(/var/log) # free -m
                   total       used       free     shared    buffers     cached
Mem:                2026       1976         49          0        143       1474
-/+ buffers/cache:             359       1667
Swap:               1027          0       1027

RAM: 2 GB

(/var/log) # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5             227G   20G  196G  10% /
/dev/sda1              99M   12M   82M  13% /boot
none                 1014M     0 1014M   0% /dev/shm
/dev/sda2             2.0G  196M  1.7G  11% /tmp

Disk usage: 10% used / 196 GB available.

It's a dedicated server, and only one website is hosted on it.

Can anybody please suggest how I can optimize the site so that it will not go down if traffic increases?
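
Also, to find which queries pile up during a spike, is inspecting the process list like this a reasonable start? (A sketch; host and credentials are placeholders.)

    <?php
    // Sketch: list running MySQL queries to spot what piles up under load.
    $db = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password');
    foreach ($db->query('SHOW FULL PROCESSLIST') as $p) {
        // Long-running rows with a non-empty Info column are the suspects.
        printf("%6d  %-20s  %5ds  %s\n",
               $p['Id'], $p['State'], $p['Time'], $p['Info']);
    }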

Thanks
Sandy

Friday
Apr102009

Facebook's Aditya giving presentation on Facebook Architecture

Facebook's engineering director Aditya talks about Facebook's architecture: how they use MySQL, PHP, and memcached, and how they have modified each to suit their requirements.

Click to read more ...

Saturday
Apr042009

Digg Architecture

Update 4: Introducing Digg's IDDB Infrastructure by Joe Stump. IDDB is a way to partition both indexes (e.g. integer sequences and unique character indexes) and actual tables across multiple storage servers (MySQL and MemcacheDB are currently supported, with more to follow). Update 3: Scaling Digg and Other Web Applications. Update 2: How Digg Works and How Digg Really Works (wear ear plugs). Brought to you straight from Digg's blog, a very succinct explanation of the major elements of the Digg architecture that traces a request through the system. I've updated this profile with the new information. Update: Digg now receives more than 230 million page views per month and 26 million unique visitors - traffic that necessitated major internal upgrades. Traffic generated by Digg's more than 22 million famously info-hungry users and 230 million page views can crash an unsuspecting website head-on into its CPU, memory, and bandwidth limits. How does Digg handle billions of requests a month?

Site: http://digg.com

Information Sources

  • How Digg Works by Digg
  • How Digg.com uses the LAMP stack to scale upward
  • Digg PHP's Scalability and Performance

    Platform

  • MySQL
  • Linux
  • PHP
  • Lucene
  • Python
  • APC PHP Accelerator
  • MCache
  • Gearman - job scheduling system
  • MogileFS - open source distributed filesystem
  • Apache
  • Memcached

    The Stats

  • Started in late 2004 with a single Linux server running Apache 1.3, PHP 4, and MySQL 4.0 using the default MyISAM storage engine.
  • Over 22 million users.
  • 230 million plus page views per month
  • 26 million unique visitors per month
  • Several billion page views per month
  • None of the scaling challenges faced had anything to do with PHP. The biggest issues faced were database related.
  • Dozens of web servers.
  • Dozens of DB servers.
  • Six specialized graph database servers to run the Recommendation Engine.
  • Six to ten machines that serve files from MogileFS.

    What's Inside

  • Specialized load balancer appliances monitor the application servers, handle failover, constantly adjust the cluster according to health, balance incoming requests, and cache JavaScript, CSS, and images. If you don't have fancy load balancers, take a look at Linux Virtual Server and Squid as a replacement.
  • Requests are passed to the Application Server cluster. Application servers consist of Apache+PHP, Memcached, Gearman, and other daemons. They are responsible for coordinating access to different services (DB, MogileFS, etc.) and creating the response sent to the browser.
  • Uses a MySQL master-slave setup (see the read/write split sketch after this list).
    - Four master databases are partitioned by functionality: promotion, profiles, comments, main.
    - Many slave databases hang off each master.
    - Writes go to the masters and reads go to the slaves.
    - Transaction-heavy servers use the InnoDB storage engine.
    - OLAP-heavy servers use the MyISAM storage engine.
    - They did not notice a performance degradation moving from MySQL 4.1 to version 5.
    - The schema is denormalized more than "your average database design."
    - Sharding is used to break the database into several smaller ones.
  • Digg's usage pattern makes it easier for them to scale. Most people just view the front page and leave. Thus 98% of Digg's database accesses are reads. With this balance of operations they don't have to worry about the complex work of architecting for writes, which makes it a lot easier for them to scale.
  • They had problems with their storage system telling them writes were on disk when they really weren't. Controllers do this to improve the appearance of their performance. But what it does is leave a giant data integrity hole in failure scenarios. This is really a pretty common problem and can be hard to fix, depending on your hardware setup.
  • To lighten their database load they used the APC PHP accelerator and MCache.
  • Memcached is used for caching and memcached servers seemed to be spread across their database and application servers. A specialized daemon monitors connections and kills connections that have been open too long.
  • You can configure PHP not to parse and compile on each load using a combination of Apache 2's worker threads, FastCGI, and a PHP accelerator. On a page's first load the PHP code is compiled, so any subsequent page loads are very fast.
  • MogileFS, a distributed file system, serves story icons, user icons, and stores copies of each story’s source. A distributed file system spreads and replicates files across a lot of disks which supports fast and scalable file access.
  • A specialized Recommendation Engine service was built to act as their distributed graph database. Relational databases are not well structured for generating recommendations so a separate service was created. LinkedIn did something similar for their graph.
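
    To make the read/write split described above concrete, here is a minimal PHP sketch; the host names, credentials, and schema are invented, and Digg's actual data access layer is not public:

        <?php
        // Sketch of a master/slave read-write split (hypothetical hosts/schema).
        function db_master() {
            return new PDO('mysql:host=db-master;dbname=digg', 'user', 'pass');
        }
        function db_slave() {
            // Pick a random slave; a real setup would also check slave health/lag.
            $slaves = array('db-slave1', 'db-slave2', 'db-slave3');
            return new PDO('mysql:host=' . $slaves[array_rand($slaves)] .
                           ';dbname=digg', 'user', 'pass');
        }

        // Reads (98% of Digg's accesses) go to slaves; writes go to the master.
        $stories = db_slave()->query('SELECT id, title FROM stories LIMIT 10')->fetchAll();
        db_master()->exec('UPDATE stories SET diggs = diggs + 1 WHERE id = 42');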

    Lessons Learned

  • The number of machines isn't as important as what the pieces are and how they fit together.
  • Don't treat the database as a hammer. Recommendations didn't fit well with the relational model, so they made a specialized service.
  • Tune MySQL through your storage engine selection. Use InnoDB when you need transactions and MyISAM when you don't. For example, tables that take transactional writes on the master can use MyISAM on the read-only slaves (see the engine-selection sketch after this list).
  • At some point in their growth curve they were unable to grow by adding RAM so had to grow through architecture.
  • People often complain Digg is slow. This is perhaps due to their large JavaScript libraries rather than their backend architecture.
  • One way they scale is by being careful of which applications they deploy on their system. They are careful not to release applications which use too much CPU. Clearly Digg has a pretty standard LAMP architecture, but I thought this was an interesting point. Engineers often have a bunch of cool features they want to release, but those features can kill an infrastructure if that infrastructure doesn't grow along with the features. So push back until your system can handle the new features. This goes to capacity planning, something Flickr emphasizes in their scaling process.
  • You have to wonder if, by limiting new features to match their infrastructure, Digg might lose ground to other faster-moving social bookmarking services. Perhaps if the infrastructure were more easily scaled they could add features faster, which would help them compete better? On the other hand, just adding features because you can doesn't make a lot of sense either.
  • The data layer is where most scaling and performance problems are to be found, and these are not language specific. You'll hit them using Java, PHP, Ruby, or insert your favorite language here.
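
    To make the engine-selection lesson concrete, here is a hedged sketch (the table and columns are invented, not Digg's schema):

        <?php
        // Sketch: choose the storage engine per table.
        $db = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

        // On the master: InnoDB for transactions and row-level locking on writes.
        $db->exec('CREATE TABLE diggs (
                      id INT PRIMARY KEY AUTO_INCREMENT,
                      story_id INT NOT NULL,
                      user_id  INT NOT NULL
                   ) ENGINE=InnoDB');

        // On a read-only slave the same table could be converted to MyISAM,
        // trading transactions for cheaper reads.
        $db->exec('ALTER TABLE diggs ENGINE=MyISAM');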

    Related Articles

  • LinkedIn Architecture
  • Live Journal Architecture
  • Flickr Architecture
  • An Unorthodox Approach to Database Design : The Coming of the Shard
  • Ebay Architecture

    Click to read more ...

    Thursday
    Oct302008

    Olio Web2.0 Toolkit - Evaluate Web Technologies and Tools

    How do you evaluate and decide which web technologies (and there are myriads out there) to use for your new web application? Which one potentially gives you the best performance, and which one will likely give you the shortest time-to-market? The Apache incubator project Olio might help. Olio is an open source web 2.0 toolkit to help evaluate the suitability, functionality, and performance of web technologies. Olio defines an example web 2.0 application (an events site somewhat like yahoo.com/upcoming) and provides three initial implementations: PHP, Java EE, and Ruby on Rails (RoR). The toolkit also defines ways to drive load against the application in order to measure performance. Apache Olio could be used to:

    • Understand how to use various web 2.0 technologies such as AJAX, memcached, mogileFS etc. Use the code in the application to understand the subtle complexities involved and how to get around issues with these technologies.
    • Evaluate the differences in the three implementations: php, ruby and java to understand which might best work for your situation.
    • Within each implementation, evaluate different infrastructure technologies by changing the servers used (e.g. Apache vs. lighttpd, MySQL vs. PostgreSQL, Ruby vs. JRuby, etc.)
    • Drive load against the application to evaluate the performance and scalability of the chosen platform.
    • Experiment with different algorithms (e.g. memcache locking, a different DB access API) by replacing portions of code in the application (a sketch of memcache locking follows below).
    Olio started its life as the web2.0kit developed by Sun Microsystems in collaboration with the U.C. Berkeley RAD Lab, and was presented at Velocity 2008.
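
    As an example of the kind of algorithm swap mentioned in the list above, a memcache-based lock in PHP might look like the following sketch (assuming the pecl memcache extension; this is not Olio's actual code):

        <?php
        // Sketch of memcache locking: add() is atomic and fails if the key
        // already exists, so only one client can hold the lock at a time.
        function acquire_lock($cache, $name, $ttl = 10) {
            return $cache->add("lock:$name", 1, 0, $ttl);
        }
        function release_lock($cache, $name) {
            $cache->delete("lock:$name");
        }

        $cache = new Memcache();
        $cache->connect('127.0.0.1', 11211);
        if (acquire_lock($cache, 'event:42')) {
            // ... safely rebuild the cached event entry ...
            release_lock($cache, 'event:42');
        }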

    Click to read more ...

    Wednesday
    Oct152008

    Outside.in Scales Up with Engine Yard and Moves from PHP to Ruby on Rails

    This article explains how Outside.in, the local social network and aggregator, scaled up its service and moved from PHP to Ruby on Rails (perhaps because the Ruby code seemed more maintainable than the PHP code?).

    The whole article is here on the Engine Yard blog.

    Click to read more ...

    Wednesday
    Apr302008

    Rather small site architecture.

    Website stats:

    Webserver: Apache 2.2
    Database: MySQL 5.0
    APC cache for PHP
    CMS: Drupal 6.2 (bleeding-edge version)*
      *Aggressive caching is ON, Page Compression ON, Block Cache ON (can't use CCS), Optimize CSS/JS ON.
    2 servers: Apache/MySQL (low-tech servers: Celeron processors, 512 MB RAM, 7200 RPM HDD)
    Bandwidth: 10 Mb/s

    The benchmark:

    Used ab: ab -n 1000 -c 20 howwhatwho.com

    Server Software:        Apache/2.2.3
    Server Hostname:        howwhatwho.com
    Server Port:            80
    Document Path:          /
    Document Length:        41639 bytes
    Concurrency Level:      20
    Time taken for tests:   13.556796 seconds
    Complete requests:      1000
    Failed requests:        0
    Write errors:           0
    Total transferred:      42118000 bytes
    HTML transferred:       41639000 bytes
    Requests per second:    73.76 [#/sec] (mean)
    Time per request:       271.136 [ms] (mean)
    Time per request:       13.557 [ms] (mean, across all concurrent requests)
    Transfer rate:          3033.90 [Kbytes/sec] received

    The Apache server is also running postfix and bind, although they aren't resource-intensive applications. The cron job for Drupal runs every 50 minutes, and the aggregator module is enabled and fetches more than 30 RSS feeds each time. The site used to be hosted on a single Celeron machine, but at peak times the CPU went up to 80%. Question: does anybody know a website hosted on an IBM mainframe? :) Todd?
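
    Since APC is already in the stack above, it's worth noting that besides opcode caching it also provides a user-data cache that could take load off MySQL between cron runs. A sketch (fetch_and_merge_rss() is an invented helper, not part of the site's code):

        <?php
        // Sketch: cache the aggregated feeds in APC's user cache.
        $feeds = apc_fetch('aggregated_feeds');
        if ($feeds === false) {
            $feeds = fetch_and_merge_rss();               // expensive: 30+ feeds
            apc_store('aggregated_feeds', $feeds, 3000);  // ~50 min, matches cron
        }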

    Click to read more ...

    Tuesday
    Nov132007

    Flickr Architecture

    Update: Flickr hits 2 Billion photos served. That's a lot of hamburgers.

    Flickr is both my favorite bird and the web's leading photo sharing site. Flickr has an amazing challenge: they must handle a vast sea of ever-expanding new content, ever-increasing legions of users, and a constant stream of new features, all while providing excellent performance. How do they do it?

    Site: http://www.flickr.com

    Information Sources

  • Flickr and PHP (an early document)
  • Capacity Planning for LAMP
  • Federation at Flickr: Doing Billions of Queries a Day by Dathan Pattishall.
  • Building Scalable Web Sites by Cal Henderson from Flickr.
  • Database War Stories #3: Flickr by Tim O'Reilly
  • Cal Henderson's Talks. A lot of useful PowerPoint presentations.

    Platform

    Click to read more ...

    Tuesday
    Oct302007

    Feedblendr Architecture - Using EC2 to Scale

    A man had a dream. His dream was to blend a bunch of RSS/Atom/RDF feeds into a single feed. The man is Beau Lebens of Feedville, and like most dreamers he was a little short on coin. So he took refuge in the home of a cheap hosting provider, and Beau realized his dream, creating FEEDblendr. But FEEDblendr chewed up so much CPU creating blended feeds that the cheap hosting provider ordered Beau to find another home. Where was Beau to go? He eventually found a new home in the virtual machine room of Amazon's EC2. This is the story of how Beau was finally able to create his feeds, safe within the cradle of affordable CPU cycles.

    Site: http://feedblendr.com/

    The Platform

  • EC2 (Fedora Core 6 Lite distro)
  • S3
  • Apache
  • PHP
  • MySQL
  • DynDNS (for round robin DNS)

    The Stats

  • Beau is a developer with some sysadmin skills, not a web server admin, so a lot of learning was involved in creating FEEDblendr.
  • FEEDblendr uses 2 EC2 instances. The same Amazon Machine Image (AMI) is used for both instances.
  • Over 10,000 blends have been created, containing over 45,000 source feeds.
  • Approx 30 blends created per day. Processors on the 2 instances are actually pegged pretty high (load averages at ~ 10 - 20 most of the time).

    The Architecture

  • Round robin DNS is used to load balance between instances.
    - The DNS is updated by hand; an instance is validated to be working correctly before the DNS is updated.
    - Instances seem to be more stable now than they were in the past, but you must still assume they can be lost at any time and that no data will be persisted between reboots.
  • The database is still hosted on an external service because EC2 does not have a decent persistent storage system.
  • The AMI is kept as minimal as possible. It is a clean instance with some auto-deployment code to load the application off of S3. This means you don't have to create new instances for every software release.
  • The deployment process is:
    - Software is developed on a laptop and stored in Subversion.
    - A makefile is used to get a revision, fix permissions, etc., package, and push to S3.
    - When the AMI launches it runs a script to grab the software package from S3.
    - The package is unpacked and a specific script inside is executed to continue the installation process.
    - Configuration files for Apache, PHP, etc. are updated.
    - Server-specific permissions, symlinks, etc. are fixed up.
    - Apache is restarted and email is sent with the IP of that machine. Then the DNS is updated by hand with the new IP address.
  • Feeds are intelligently cached independently on each instance (a sketch follows below). This is to reduce the costly polling for feeds as much as possible. S3 was tried as a common feed cache for both instances, but it was too slow. Perhaps feeds could be written to each instance so they would be cached on each machine?
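
    Here is a sketch of that per-instance caching idea using a conditional GET, so unchanged feeds cost little bandwidth (illustrative only; FEEDblendr's real code is not public):

        <?php
        // Sketch: cache a feed on local disk and re-fetch only if it changed,
        // using an If-Modified-Since conditional GET via curl.
        function fetch_feed($url, $cache_file) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            if (file_exists($cache_file)) {
                curl_setopt($ch, CURLOPT_TIMECONDITION, CURL_TIMECOND_IFMODSINCE);
                curl_setopt($ch, CURLOPT_TIMEVALUE, filemtime($cache_file));
            }
            $body = curl_exec($ch);
            $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            curl_close($ch);
            if ($code == 304) {                 // unchanged: serve the cached copy
                return file_get_contents($cache_file);
            }
            file_put_contents($cache_file, $body);
            return $body;
        }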

    Lessons Learned

  • A low budget startup can effectively bootstrap using EC2 and S3.
  • For the budget conscious the free ZoneEdit service might work just as well as the $50/year DynDNS service (which works fine).
  • Round robin load balancing is slow and unreliable. Even with a short TTL for the DNS, some systems hold on to the IP address for a long time, so new machines are not load balanced to.
  • Many problems exist with RSS implementations that keep feeds from being effectively blended. A lot of CPU is spent reading and blending feeds unnecessarily because there's no reliable cross-implementation way to tell when a feed has really changed or not.
  • It's really a big mindset change to consider that your instances can go away at any time. You have to change your architecture and design to live with this fact. But once you internalize this model, most problems can be solved.
  • EC2's poor load balancing and persistence capabilities make development and deployment a lot harder than it should be.
  • Use the AMI's ability to be passed a parameter to select which configuration to load from S3. This allows you to test different configurations without moving/deleting the current active one.
  • Create an automated test system to validate an instance as it boots. Then automatically update the DNS if the tests pass. This makes it easy to create new instances and takes the slow human out of the loop.
  • Always load software from S3. The last thing you want happening is your instance loading, and for some reason not being able to contact your SVN server, and thus failing to load properly. Putting it in S3 virtually eliminates the chances of this occurring, because it's on the same network.

    Related Articles

  • What is a 'River of News' style aggregator?
  • Build an Infinitely Scalable Infrastructure for $100 Using Amazon Services

    Click to read more ...

    Tuesday
    Oct022007

    Secrets to Fotolog's Scaling Success

    Fotolog, a social blogging site centered around photos, grew from about 300 thousand users in 2004 to over 11 million users in 2007. Though they initially experienced the inevitable pains of rapid growth, they overcame their problems and now manage over 300 million photos, with 800,000 new photos added each day. Generating all that fabulous content are 20 million unique monthly visitors and a volunteer army of 30,000 new users each day. They did so well a very impressed suitor bought them out for a cool $90 million. That's scale meets success by anyone's standards. How did they do it?

    Site: http://www.fotolog.com/

    Information Sources

  • Scaling the World's Largest Photo Blogging Community
  • Congrats to Fotolog on $90mm sale to Hi-Media
  • Fotolog overtaking Flickr?
  • Fotolog Hits 11 Million Members and 300 Million Photos Posted
  • Site of the Week: Fotolog.com by PC Magazine
  • CEO John Borthwick's Blog.
  • DBA Frank Mash's Blog
  • Fotolog, lessons learnt by John Borthwick .

    The Platform

  • Java
  • PHP
  • Sun
  • Solaris 10
  • MySQL
  • Apache
  • Hibernate
  • Memcached
  • 3PAR (a simple, efficient and scalable tiered-storage array for utility computing)
  • IBRIX (a single namespace parallel file system, a scalable volume manager, high availability feature)
  • StrongMail
  • CDN: Akamai/Panther

    The Stats

  • Started in 2002. In 2004 they had around 300k or 400k members, 3 employees, no scalable infrastructure, and no revenue model.
  • Due to the rapid growth the site had frequent technical problems, and in 2005 they had to limit new free members to 1,000 a day.
  • In 2007 they had over 11 million users and were sold for $90 million to Hi-Media.
  • Members are from over 200 countries with a majority in South America. Over 20% of page views are from Europe. They rejected a US centric strategy, developing a global and engaged audience.
  • Generates over 3.5 billion page views and receives over 20 million unique visitors each month and has earned a top 20 Alexa ranking.
  • Manages over 300 million photos, and over 500,000 photos are uploaded each day.
  • Over 30,000 new members are added each day, and the site attracts more than 4.6 million daily users. It expanded with no marketing or member incentives.
  • Over 500 user-generated communities.
  • 20% of members visit the site daily and spend an average of 24 minutes.
  • 32 MySQL servers and a 30-server memcached cluster.

    The Architecture

  • Site originally written in PHP.
    - Their new "Fotolog memberpage" feature is written in Java, with significant performance improvement. The page is cleaner with an improved response time.
    - They are now serving the site on less than half the boxes they were using.
    - Daily registrations are up over 35%, given the improved performance and a requirement to register to post a guest book message.
    - The new code base allows them to innovate much more on the member experience.
  • They have surpassed Flickr in popularity while remaining a firmly Web 1.0 application.
    - There are no tags, no APIs, no JavaScript widgets, no Ajax.
    - They have a Spanish language option which extends the site to a broad user base.
    - They use very little text. It's mostly visual, so it is usable by a broad range of users.
    - Their interface is customizable, and many people like to express their individual identities.
    - Their unique visitors are 1MM less than Yahoo's, yet the total minutes on the site are twice that of Yahoo and pages are 3x.
  • Revenue model:
    - A gold camera membership for about $5/month means you can upload 6 photos a day instead of 1, have 200 comments per photo instead of 20, a custom title image for your profile, and a mini-thumbnail of your most recent photo displayed next to your name in guest books, plus the possibility of having your photo featured on the front page.
    - AdSense. Revenue lift from Google is trending up approximately 15% given additional contextual data from guest books.
    - Will move to peer-to-peer advertising among their members.
    - Members will have the ability to buy and sell real and virtual items using a micro-payment service.
  • They have a one-post-per-day rule where users can only post one photo a day. Rather than inhibit growth, this rule ensures quality and generates exceptional usage by increasing the chance of a photo being seen and by attracting positive comments. Whereas people usually run out of things to say on a blog, people can always find a picture to take, upload, and talk about.
  • Only photos less than 2,000 KB in size can be uploaded. These are automatically resized to a 500x500 format. Pages look cleaner and load faster.
  • Model is browsing over searching. Opportunistic serendipitous treasure hunting is encouraged.
  • Friends are added automatically without needing permission. This generates an audience for your photos.
  • Supports a browse by groups feature, which have categories like "Colors" and "Emotions."
  • The site is intentionally simple.
    - They have resisted the temptation to add feature after feature. Instead their vision is to offer a handful of features, similar to Craigslist, the focus being on content and the conversations.
    - Pages need to be social.
    - Pages need to include not only your images, but also images from across the network, providing a visual navigation that today drives much of the time their members spend on the site; a self-formed, organic distribution system, letting members see and be seen.
    - Complementing this social network of images are comments and guest book entries, making the experience one where media intersects with communications; day in, day out, millions of images collide with billions of conversations.
  • Photobucket vs. Fotolog:
    - Photobucket stores image-based media, then distributes it to your page on social networking sites such as Myspace, Bebo, Piczo, Friendster, etc.
    - Fotolog is a destination.
    - The first generation of social-networking sites stressed self-publishing over connections (from Geocities, to Tripod, to Blogger). The next generation focused mostly on connections (sixdegrees and Friendster are the classic examples here: tools to gather friends and connections, as social capital accrues in theory to the people with the most connections). The third and current generation of sites blends media with connections, each with a different emphasis.
  • Backup: Sun 6540 disk array
  • Their 32 SQL servers are divided into four clusters: user, GB (guest book), PH (photos), FF (friends and favorites lists).
    - Uses non-persistent connections.
    - Connection pooling on the Java side.
    - InnoDB.
    - Partitioning is handled by the application layer.
  • Each cluster:
    - Is fronted by a set of application servers.
    - Is divided into a set of shards.
    - Each shard has a MySQL write-only master-master configuration feeding a few read-only slaves.
    - Application servers send their read requests to the slaves and their write requests to the masters.
    - Data are assigned to shards based on some sort of cluster-specific partitioning key (see the shard-selection sketch after this list). Naive partitioning algorithms can lead to very uneven shard loads; you want a more balanced load on each shard.
  • MySQL is used to store image metadata only. This seems pretty standard. Almost nobody seems to store important blobs in the database because it slows down database operations.
  • Photo storage uses 3PAR and IBRIX. A CDN is used for hot content.
  • The virtual storage system, though expensive, has worked very well.
  • As more selects are used, lock contention for auto-incremented keys grows.
  • Through database optimizations they've been able to grow from 4 million members to 11 million members on the same 32 database servers. This is also due to the efficiency of MySQL running on Solaris 10, to increasing the memcached cluster, to porting to Java, and to adding RAM.
  • Happy with memcached.
    - Created a distributed cluster of 50 memcached servers with a total cache size of approximately 150 gigabytes, supporting around 4 billion page views/month. Peak load times dropped from 10 seconds to 2 seconds.
    - Quote from the CTO:
    "I have a new memcached user to add to your list: we here at Fotolog, the world's largest photo blogging community, now use it and we love it. I just rolled our first code to use it into production today and it has been a lifesaver. I can't wait to start using it in places where we had been relying on Berkeley databases to offload some database work. We are not some wimpy million page a day site, either. Fotolog is a billion+ pages/month site (35 to 40 million views/day is pretty typical for us). We had recently overcome some significant DB-related performance issues which allowed our site traffic to explode, and it started to bog down again under the heavy traffic load (getting back up towards 10 seconds for a page to load sometimes during the peak periods). The servers were churning away, each recreating a list every time, when it could easily be shared in the same form for at least 5 or 10 minutes. So we introduced memcache, creating a distributed 30-server cluster with 4 gigs available in total, and made a very minor code mod to use memcache, and our peak period load times dropped back down to the 2 second or so range. It has allowed for continued growth and incredible efficiency. I can't say when I've ever been so pleased with something that worked so simply."
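
    To illustrate the application-layer partitioning described above, here is a hedged sketch of shard selection (the modulo scheme, shard count, and host names are invented; Fotolog's real partitioning key is not public):

        <?php
        // Sketch: route a user's photo queries to one shard of the PH cluster.
        function shard_for_user($user_id, $num_shards) {
            // Naive modulo partitioning; as noted above, naive schemes can
            // load shards unevenly if keys aren't uniformly distributed.
            return $user_id % $num_shards;
        }
        function photo_db_for_user($user_id) {
            $shard = shard_for_user($user_id, 8);   // e.g. 8 shards
            return new PDO("mysql:host=ph-shard$shard;dbname=photos",
                           'user', 'pass');
        }

        $db = photo_db_for_user(123456);
        $photos = $db->query('SELECT id, title FROM photos WHERE user_id = 123456')
                     ->fetchAll();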

    Lessons Learned

  • Popularity is driven by a base of active users, not a rich set of cool features.
  • The web is global and its tail is very long. By courting users outside the US with language and culturally specific design you can compete with the big boys. Some of the hardest competition for Google, Yahoo, etc. comes from local startups with an ear for what the locals want.
  • If you want to get a lot of buzz, then do whatever the alpha geeks want you to do. If you want a lot of happy users, do what they want you to do.
  • Constraints in web sites can, as in poetry, make something unexpectedly better. The rule that users are only allowed to post one photo per day creates an environment where people comment more on each other's photos, which creates a more engaged community. Who knew?
  • Protect your website with limits. Limit the size of pictures, comments, etc so your resource usage doesn't grow outrageously.
  • Have a vision. Have a strong sense of what your site is supposed to be and why, then use that vision to decide what you should build and how you should build it. Their vision of a social site built around daily photographs led to a very different site than one whose goal is to store all your photos.
  • Revenue generation features can be added without destroying the integrity of your site. I really like how they give people a reasonable set of features for free and then charge for the resources they need to have more. Those features also serve to extend and reinforce the social vision of their site. It will be interesting to see how their new monetization strategies play out.
  • Don't be afraid to scale up and out. By adding more cache, more RAM, more CPUs, and more efficient CPUs you handle dramatically more load with the same number of machines. And that's a good thing from a datacenter space and power POV.
  • Making MySQL perform:
    - Find the source of the problem.
    - Mature systems are mostly disk bound.
    - The query cache may be hurting you.
    - Add RAM to help dodge the bullet.
    - Stripe your disks.
    - Restructure tables for optimal performance.
    - Use libumem.so to find memory leaks.
  • Things to remember:
    - Know the problem.
    - Know your application.
    - Know your storage engine.
    - Know your requirements.
    - Know your budget.
    - Use all this information to decide what parts of your system really require the investment of time, money, and testing to be highly available.

    Related Articles

  • Flickr Architecture
  • An Unorthodox Approach to Database Design : The Coming of the Shard

    Click to read more ...

    Saturday
    Sep082007

    Making the case for PHP at Yahoo! (Oct 2002)

    This presentation by Michael Radwin describes why Yahoo! standardized on PHP going forward. It describes how, after reviewing all the web technologies, including their own internal ones, PHP was chosen. It shows that not only technical reasons, but also business and development processes, were taken into account.

    Click to read more ...