Entries in General Discussion (161)

Thursday
Apr 23, 2009

Which key-value pair database should be used?

My table has 2 columns. Column1 is an id; Column2 contains information given by users about the item in Column1. A user can give 3 types of information about an item. I separate the fields of a single user's opinion with commas, and one user's opinion from another's with a semicolon. Example: 23-34,us,56;78,in,78. I need to aggregate the opinions of all users very quickly. My idea is to have an index on the key so that searching is very fast. Currently I'm using MySQL. My problem is that the maximum column size is below my requirement: if the value overflows, I create a new row with the same id and insert the rest of the data into the new row. In practice I would have at most around 5-10 such rows per id. I'm wondering if there is a database that would remove this application code. I just learned about key-value pair databases, which are exactly what I need, but which one doesn't put a constraint on value size (I mean, is much better than an RDBMS in that respect)? This application is not in production.
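A key-value store with a native list type would eliminate the overflow handling entirely. Here is a minimal sketch of the idea, assuming Redis and the redis-py client (the key naming scheme and sample data are just illustrations, not from the original post):

    import redis

    r = redis.Redis(host="localhost", port=6379)

    # One list per item id; each element is one user's comma-separated opinion.
    # There is no column-size limit to work around: appends simply grow the list,
    # so the "new row on overflow" logic disappears from the application.
    r.rpush("item:23", "34,us,56")
    r.rpush("item:23", "78,in,78")

    # Fetch every user's opinion for item 23 with one indexed key lookup.
    opinions = [o.decode() for o in r.lrange("item:23", 0, -1)]
    for opinion in opinions:
        user_fields = opinion.split(",")  # the three per-user fields
        print(user_fields)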

Wednesday
Apr 15, 2009

Implementing large scale web analytics

Does anyone know of any articles or papers that discuss the nuts and bolts of how web analytics is implemented at organizations with large volumes of web traffic and a critical business need to analyze that data - e.g. places like Amazon.com, eBay, and Google? Just as a fun project I'm planning to build my own web log analysis app that can effectively index and query large volumes of web log data (i.e. in the TB range). But first I'd like to learn more about how it's done in the organizations whose lifeblood depends on this stuff. Even just a high-level architectural overview of their approaches would be nice to have.
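For illustration, the usual pattern at that scale is to shard the logs and aggregate in a map/reduce style. Below is a toy single-machine sketch of the two phases, assuming common-log-format files (the paths and field positions are assumptions, not from any of those companies):

    import glob
    from collections import Counter

    # Toy map/reduce-style aggregation: "map" each log line to a (url, 1)
    # pair, then "reduce" by summing counts per url. At TB scale the same
    # two phases would be spread across many machines (e.g. with Hadoop).

    def map_lines(path):
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) > 6:       # parts[6] is the request path in common log format
                    yield parts[6], 1

    def reduce_counts(pairs):
        counts = Counter()
        for url, n in pairs:
            counts[url] += n
        return counts

    totals = Counter()
    for path in glob.glob("logs/access*.log"):  # hypothetical shard files
        totals += reduce_counts(map_lines(path))

    print(totals.most_common(10))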

Wednesday
Apr 15, 2009

Using HTTP cache headers effectively

Hi, some time ago Martin Fowler blogged about how HTTP cache headers can be used very effectively in web site design: http://www.martinfowler.com/bliki/SegmentationByFreshness.html How actively are HTTP cache headers considered in web site design? I think they are a great tool for taking a lot of load off the server, and they should be considered before designing any complex caching strategy. Thoughts? Thanks, Unmesh
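For illustration, here is a minimal WSGI sketch of the two most useful headers: Cache-Control for freshness and ETag for validation (the app and payload are hypothetical):

    import hashlib

    def app(environ, start_response):
        body = b"<html>...page content...</html>"
        etag = '"%s"' % hashlib.md5(body).hexdigest()

        # Conditional GET: if the client already has this version,
        # answer 304 and send no body at all.
        if environ.get("HTTP_IF_NONE_MATCH") == etag:
            start_response("304 Not Modified", [("ETag", etag)])
            return [b""]

        headers = [
            ("Content-Type", "text/html"),
            ("ETag", etag),
            # Let browsers and proxies reuse this response for an hour.
            ("Cache-Control", "public, max-age=3600"),
        ]
        start_response("200 OK", headers)
        return [body]

Run it with wsgiref.simple_server.make_server to see the headers; the point of the Fowler post is to segment pages so stable fragments get a long max-age and volatile ones a short one.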

Monday
Apr 13, 2009

Benchmark for keeping data in browser in AJAX projects

Hi, we are using AJAX and see a lot of opportunity to keep session state in the browser as JavaScript objects. Is there any benchmark on how much data you can generally keep in JavaScript objects in the browser? Thanks, Unmesh

Friday
Apr 10, 2009

Counting # of views, calculating most/least viewed

I'm seeking a design pattern, advice, or directions. I need to count views/downloads of a set of resources, identified by their respective URLs. That is not a big problem. I also need to keep a list of resources viewed/downloaded in the last X days. This list needs to be updated every now and then to reflect the real last X days of usage, so resources that were requested more than X days ago get evicted from it. It's sort of a black box: you feed messages (download requests) in, and it gives you that list of URLs with counters on the other end. How would you go about designing it?
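One common design is to bucket counts per day and evict buckets older than X days when the list is read. A minimal in-memory sketch of that idea (the class and method names are mine, not from any particular library):

    import time
    from collections import defaultdict

    class SlidingWindowCounter:
        """Counts views per URL over the last `window_days` days,
        using one counter bucket per day so old data can be evicted."""

        def __init__(self, window_days=30):
            self.window_days = window_days
            self.buckets = defaultdict(lambda: defaultdict(int))  # day -> {url: count}

        def _today(self):
            return int(time.time()) // 86400

        def record_view(self, url):
            self.buckets[self._today()][url] += 1

        def evict_old(self):
            cutoff = self._today() - self.window_days
            for day in [d for d in self.buckets if d < cutoff]:
                del self.buckets[day]

        def top_urls(self, n=10):
            self.evict_old()
            totals = defaultdict(int)
            for counts in self.buckets.values():
                for url, c in counts.items():
                    totals[url] += c
            return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

Feeding record_view() from the request stream and calling top_urls() on demand gives exactly the black-box behavior described: eviction is lazy, and the per-day granularity bounds how stale the window can be.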

Wednesday
Apr 8, 2009

N+1+caching is ok?

Hibernate, iBATIS, and other similar tools have documentation with recommendations for avoiding the "N+1 select" problem. The problem is that if you wanted to retrieve a set of widgets from a table, one query would be used to retrieve the ids of all the matching widgets (select widget_id from widget where ...), and then for each id another select is used to retrieve the details of that widget (select * from widget where widget_id = ?). If you have 100 widgets, it requires 101 queries to get the details of them all. I can see why this is bad, but what if you're doing entity caching? I.e., if you run the first query to get your list of ids, and then retrieve each widget from the cache, surely in that case N+1(+caching) is good? Assuming, of course, that there is a high probability of all of the matching entities being in the cache. I may be asking a daft question here - one whose answer is obviously implied by the large scalable mechanisms for storing data that are in use these days.
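For illustration, here is a rough sketch of the pattern with a hypothetical cache (its get/set API is assumed), plus a batched SELECT ... IN fallback so that cache misses cost one extra query rather than one query per widget:

    # Works with a sqlite3-style connection; `cache` is any object with
    # get/set. The batched fallback bounds the worst case at 2 queries.

    def load_widgets(conn, cache, where_sql, params):
        ids = [row[0] for row in conn.execute(
            "select widget_id from widget where " + where_sql, params)]

        widgets, missing = {}, []
        for wid in ids:
            cached = cache.get(("widget", wid))
            if cached is not None:
                widgets[wid] = cached      # cache hit: no query at all
            else:
                missing.append(wid)

        if missing:                        # one batched query, not N
            marks = ",".join("?" * len(missing))
            for row in conn.execute(
                    "select * from widget where widget_id in (%s)" % marks,
                    missing):
                widgets[row[0]] = row      # assumes widget_id is the first column
                cache.set(("widget", row[0]), row)

        return [widgets[wid] for wid in ids]

With a hot cache this degenerates to 1 query plus N cache lookups, which is the "N+1(+caching) is good" case the question describes; with a cold cache it is still only 2 queries.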

Monday
Apr 6, 2009

How do you monitor the performance of your cluster?

I had posted a note the other day about collectl and its ganglia interface, but perhaps I wasn't provocative enough to get any responses, so let me ask it a different way: how do people monitor their clusters, and more importantly, how often? Do you monitor to get a general sense of what the system is doing, or do you monitor with the expectation that when something goes wrong you'll have enough data to diagnose the problem? Or both? I suspect both...

Many cluster-based monitoring tools have a data collection daemon running on each target node which periodically sends data to some central management station. That machine typically writes the data to a database from which it can extract historical plots; some even put up graphics in real time. From my experience working with large clusters - and I'm talking many hundreds or even thousands of nodes - most have to limit both the amount of data they manage centrally and the frequency at which they collect it, otherwise they'll overwhelm their management station, because most databases can't write hundreds of counters many times a minute from thousands of nodes.

As a related example, how many of you run sar at the default monitoring interval of 10 minutes? Do you really think you're getting useful information? What happens if you have a 2-minute burst of 100% network load and you're idle the other 8 minutes? Sar will happily tell you the network load was 20%, and you'll never know your network is tanking.

The point of all this is that I do think there's a place for central monitoring, though I'm personally not a fan because of the inaccuracy of infrequent data samples; but I also appreciate that some data is better than none, as long as you realize the inherent accuracy problems. And that's where collectl comes in, and my previous comment about ganglia. When I wrote collectl my overarching design goal, from which I haven't wavered, was to provide highly accurate local data with minimal overhead, so you can take samples in the 1-10 second range without fear of impacting the rest of the system. You can literally sample just about everything going on every 10 seconds and use <0.1% of the CPU. If you're willing to give up a few more tenths of a percent you can even monitor processes and slab activity, though you should only sample those at a 60-second frequency because it IS expensive to monitor them.

However, I also realize this doesn't do any good if you have thousands of machines to watch, and that's where the socket interface comes in, over which collectl can send data to a central manager at that same frequency, or, if you prefer, send its remote data at a different rate, giving you the best of both worlds. Collectl can provide its data to a central management station while at the same time logging locally for accuracy, which lets you do a deep dive into the data if a problem does arise for which there is not enough data stored centrally.

My point about the ganglia interface was a response to the fact that a lot of people running large (as well as smaller) clusters do use ganglia, but like most central monitoring setups they have to give up the accuracy of finer-grained data. I was just wondering if anyone reading this forum uses ganglia and might be interested in trying out the collectl interface to it. -mark
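To make the sar example above concrete, here is a toy calculation showing how a 10-minute average hides a 2-minute burst (the numbers mirror the example in the post):

    # Minute-by-minute network utilization (%): a 2-minute burst at 100%,
    # then 8 idle minutes.
    samples_per_min = [100, 100, 0, 0, 0, 0, 0, 0, 0, 0]

    ten_min_avg = sum(samples_per_min) / len(samples_per_min)
    print(ten_min_avg)    # 20.0 -- what a 10-minute interval reports

    one_min_peak = max(samples_per_min)
    print(one_min_peak)   # 100  -- what fine-grained sampling would reveal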

Monday
Apr 6, 2009

A picture is really worth a thousand words, and also a window in time...

A photographic picture, to me, is a window - an address to that specific moment. What do you think about that?

Friday
Apr 3, 2009

Collectl interface to Ganglia - any interest?

It's been a while since I've said anything about collectl, and I wanted to let this group know I'm currently working on an interface to ganglia, since I've seen a variety of posts ranging from how much data to log and where to log it, to which tools/mechanisms to use for logging. From my perspective there are essentially 2 camps on the monitoring front. One says to have distributed agents all sending their data to a central point, but don't send too much or too often. The other camp (which is the one I'm in) says do it all locally with a highly efficient data collector, because you need a lot of data (I also read a post here about logging everything) and you can't possibly monitor hundreds or thousands of nodes remotely at the granularity necessary to get anything meaningful.

Enter collectl and its evolving interface to ganglia. This will allow you to log lots of detailed data on local nodes at the usual 10-second interval (or more frequently if you prefer) at about 0.1% system overhead, while sending a subset at a lower rate to the ganglia gmonds. This would give you the best of both worlds, but I don't know if people are too married to the centralized concept to try something different. I don't know how many people who follow this forum have actually tried it - I know at least a few of you have - but to learn more just go to http://collectl.sourceforge.net/ and look at some of the documentation, or just download the rpm and type 'collectl'. -mark
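For readers who want the flavor of the two-rate idea, here is a hedged sketch (this is not collectl's actual protocol; the host, port, and metric source are made up): sample every 10 seconds, log every sample locally at full resolution, and forward only a one-minute average to a central station.

    import random
    import socket
    import time

    CENTRAL = ("mgmt.example.com", 8649)   # placeholder central station
    SAMPLE_INTERVAL = 10                   # seconds between local samples
    FORWARD_EVERY = 6                      # forward one average per 6 samples (60 s)

    def read_metric():
        # Stand-in for reading a real counter from /proc.
        return random.random() * 100

    def run():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        pending = []
        with open("/tmp/samples.log", "a") as log:
            while True:
                value = read_metric()
                # Full-detail sample kept locally for deep dives.
                log.write("%d %.2f\n" % (time.time(), value))
                pending.append(value)
                if len(pending) == FORWARD_EVERY:
                    # Only the coarse average crosses the network.
                    avg = sum(pending) / len(pending)
                    sock.sendto(("net_pct %.2f" % avg).encode(), CENTRAL)
                    pending = []
                time.sleep(SAMPLE_INTERVAL)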

Monday
Mar 16, 2009

Cisco and Sun to Compete for Unified Computing?

A recent InfoWorld article claims that "With Cisco expected to enter the blade market and Sun expected to offer networking equipment, things could get interesting awfully fast." How does this affect your infrastructure strategy and decisions? Would you consider building scalable web applications on the Cisco Unified Computing System? Or would you consider building a router out of a server using OpenSolaris and Project Crossbow, as the article suggests? Will any of these initiatives change the way we build scalable web infrastructure, or are these just attempts to sell these systems? What do you think?
