XP Service Pack 2 (SP2) Screenshots

I downloaded the latest service pack for Windows XP, SP2, yesterday. First of all, it's pretty big: the ISO image was 475 MB, so this is not something you want to download over a slow connection. Microsoft has said that you should be able to order the CD and have it shipped to you (it's free). I was able to install the service pack on one of my test machines without any problems and am now in the process of installing it on a few other XP machines I have running.

A few words of caution before you do anything:

  • As always, make sure you back up anything that you cannot live without, in case something goes wrong.
  • Close all applications, including any spyware watchers you might have, such as Spybot.
  • Ensure you have enough free disk space.
  • If you have a NAS, I would recommend disconnecting it. I have a Ximeta, and the Service Pack somehow automatically selected that drive as its temporary store, which made the install quite slow. I did not check whether there is an optional setup parameter to specify which drive to use. I also suspect it automatically picks the drive with the most free space, which in my case was the NAS.
  • Do let it create the uninstall information; even though it takes more space, this is what would be used to roll back if something goes wrong.

OK, here are the screenshots of the installation, followed by shots of the new security features.

  1. Welcome Screen:

  2. Step 2:

     
  3. Read-Me Part 1:
  4. Read-Me Part 2:
  5. Extracting Files

  6. Setup Wizard

  7. License Agreement

  8. Uninstall Folder

  9. Updating the System

  10. Performing Inventory

  11. Estimating Space

  12. Backing up Files

  13. Installing Files Part 1

  14. Installing Files Part 2

  15. Updating the Registry

  16. Restarting Processes

  17. Performing Cleanup

  18. Completing


Here are more screenshots of the new features that are part of SP2; I have grouped them by feature.

Control Panel – Below are the new icons that now show up in the Control Panel.


Windows Firewall:

  1. General

  2. Exceptions to the Firewall

  3. Advanced

  4. Firewall Advanced Settings

  5. ICMP Setting

  6. Connection Settings

  7. Log Setting

  8. Security Alert


Wireless Settings:

  1. Choosing a Wireless Network

  2. Wireless Connection

  3. New Wireless Icon:


Miscellaneous – Below are screenshots of File Security, Virus Protection and the new Win Security Center.

  1. File Download

  2. File Open

  3. Popup Blocker

  4. Virus Protection
  5. Win Security Center

  6. Security Essentials

  7. Automatic Updates

  8. Security Alert Settings

  9. Security Alert Icon Task –
  10. Win About

So, any issues? Well, none big so far, with the exception that I have Norton Antivirus 2004 running but the Security Center does not recognise it. I can manually configure it to ignore that, but ideally it should be able to pick this up automatically. Not sure if this is a known bug, but this is a pretty common application and I am a bit surprised that Microsoft did not catch it. Has anyone else experienced any problems?

MSN Web Messenger?

Microsoft is beta testing the Web version of MSN Messenger – and about time, I think, since Yahoo has had a web version of its messenger service for a while now. This is of course useful if you are on a shared computer, or otherwise not on your own computer, and cannot (or do not want to) install MSN Messenger.

This is supposed to also work in Netscape 7.0 or later and Mozilla 1.6 or later, in addition to IE, though I have not tried it in anything other than IE.

Exceptional Condition Handling in SQL Server 2005

This is the second of my SQL Server 2005 posts; you can read the first part on Hosting the .NET runtime in SQL Server.

In the CLR, certain conditions such as out-of-memory and stack overflow can bring down an app domain (or the process). This cannot be allowed in SQL Server 2005 when the latter is acting as a host (for the CLR), as it would affect reliability and performance – a couple of the key goals for SQL Server. Similarly, unconditionally stopping a thread (e.g. via Thread.Abort) can potentially leave some resources in a “hung” state.

So how do we handle this? Other hosts such as ASP.NET recycle the app domain in such situations, which is OK because a web application is disconnected and not “long running”. On the other hand, SQL Server can have long-running batch jobs, and rolling all that work back is in most cases not an option. Out-of-memory conditions are pretty hard to handle even if you leave a safety buffer of memory. Since SQL Server manages its own memory and tends to use all available memory to maximise throughput, this puts it in an even more difficult position: if you increase the “safety net” to handle out-of-memory conditions, you also increase the chances of getting into those situations.

The good news is that .NET 2.0 (which ships with Whidbey) handles these situations better than version 1.x: it notifies SQL Server about the repercussions of failing each memory request. It also facilitates running the GC more aggressively, waiting for existing procedural code to finish before starting to execute newer code, and aborting running threads if needed. In addition, at the CLR level there is a failure escalation policy that allows SQL Server to deal with those situations. Using this, SQL Server can, among other things, either abort the thread causing the exception or unload the whole app domain.

If a resource allocation fails, the CLR will unwind the whole stack, and if any locks are held it will unload the whole app domain, since locks indicate there is some shared state being synchronised, and that state most likely would not be consistent after the failure. If the CLR encounters a stack overflow that is not caught by the application, it will unwind the stack and get rid of that thread.
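The escalation decision described above can be sketched as a small decision function. To be clear, this is an illustrative model of the policy, not the actual CLR hosting API; the enum and function names are hypothetical:

```cpp
#include <cassert>

// Conditions that can trigger escalation (illustrative names).
enum class Failure { OutOfMemory, StackOverflow, ThreadAbort };

// Possible escalation actions, per the post: unwind just the failing
// thread, or recycle the whole app domain.
enum class Action { AbortThread, UnloadAppDomain };

// If the failing code held locks, there is shared state that may now be
// inconsistent, so the whole app domain goes; otherwise unwinding the
// offending thread is enough.
Action escalate(Failure f, bool locks_held) {
    if (locks_held) return Action::UnloadAppDomain;
    (void)f;  // in this sketch, every lock-free failure is handled per-thread
    return Action::AbortThread;
}
```

The key point the sketch captures is that the presence of locks, not the kind of failure, is what pushes the escalation from thread level to app-domain level.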

Going back to the main goals, SQL Server will maintain transactional semantics. If an app domain is recycled, the main reason is probably reliability (even if at the expense of performance). As a note, all the class libraries beyond the BCL that SQL Server will load have been verified to clean up memory and other resources after a thread is aborted or an app domain is unloaded.

I will post on security and app domains at a later time.

Common Sense ASP.NET Tips

Those who are unfortunate enough to know me know I keep saying that common sense is not very common (and how true that is). VS Mag has an article that lists out some best practices for ASP.NET; though they approach it from a performance perspective, I feel everyone should follow these irrespective of what their perf requirements are:

  1. Use Page.IsPostBack – There is a lot of work one does when the page loads for the first time; no need to keep doing it again on every postback.
  2. Use stored procedures – Need I explain why? This is a no-brainer!
  3. Use HttpServerUtility.Transfer() – This is better than Response.Redirect() as there is no client round trip.
  4. Don’t overdo a good thing – I like this one.
    • Save ViewState only when necessary – now why didn’t I think of this before? *grin*
    • Don’t rely on exceptions – They are expensive! Make sure it is an exception and not you being lazy. You can read my article on it if you wish.
    • Restrict use of session state – Switch it off when not using it; it is on by default and, as all good things go, it’s not free.
  5. Limit server controls – Server controls have a pretty decent overhead as they require server-side support and are quite chatty between the client and the server. Use an HTML control wherever possible.
  6. JIT your apps – This one again is a no-brainer. This is easier to do in Whidbey, btw.
  7. Use caching – Either use your own or ASP.NET’s caching, but remember not to put items that expire very quickly in the cache; there is a fine line, and you might want to do some stress and load testing to see what balance works for you.
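The caching trade-off in tip 7 is language-neutral, so here is a minimal sketch of a time-to-live cache (this is my own hypothetical class, not ASP.NET's Cache API): an item whose TTL is too short is evicted before anyone reads it, so caching it costs you the insert without ever saving a lookup.

```cpp
#include <cassert>
#include <chrono>
#include <map>
#include <optional>
#include <string>

// A minimal TTL cache: each entry carries an absolute expiry time and is
// treated as missing once that time has passed.
class TtlCache {
public:
    using Clock = std::chrono::steady_clock;

    void put(const std::string& key, const std::string& value,
             std::chrono::seconds ttl) {
        items_[key] = Entry{value, Clock::now() + ttl};
    }

    // Returns the value only if it has not expired yet.
    std::optional<std::string> get(const std::string& key) {
        auto it = items_.find(key);
        if (it == items_.end() || Clock::now() >= it->second.expires)
            return std::nullopt;
        return it->second.value;
    }

private:
    struct Entry { std::string value; Clock::time_point expires; };
    std::map<std::string, Entry> items_;
};
```

Stress testing, as the tip suggests, is really about finding the TTL below which your hit rate drops off and the cache stops paying for itself.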

Hosting the .NET Runtime in SQL Server 2005

I finally got around to trying out SQL Server 2005 (a.k.a. Yukon) and reading up a little on how it operates under the covers. I had earlier discussed SQL Server Express. This is the first of a series of posts where I will highlight some of the new things a developer can do in SQL Server 2005 from a .NET perspective; since there are many DBAs who live and breathe SQL Server, I will leave all the database and T-SQL specific stuff to them.

What is a runtime host? Basically, any process that loads the .NET runtime and runs code in that managed environment is a runtime host. The most common host is the Windows shell – when you double-click a .NET application, the host (i.e. the Windows shell in this case) loads the runtime into memory and then loads the requested assembly. The host loads the runtime through a shim DLL called mscoree.dll, whose only purpose in life is to load the runtime. This is done via the CorBindToRuntimeEx API, which returns the ICorRuntimeHost interface. Till Whidbey, the only parameters that could be passed through this API were:

  • Server or workstation behaviour
  • CLR Version
  • GC Behaviour
  • Enable sharing of JIT code across AppDomains

Till SQL Server 2005 there were two main runtime hosts for .NET, namely the ASP.NET worker process and IE, each with its own priorities for the framework. When adding the CLR to SQL Server, the three main goals, in descending order of priority, were:

  1. Security
  2. Reliability
  3. Performance

It's not surprising to see Security and Reliability given a higher priority than Performance. You don’t want your mission-critical application to run in a non-secure environment that can allow malicious code in and let hackers get at your data. Also, critical applications such as databases are quite often needed to sustain the five-9s requirement (99.999% uptime), especially in the enterprise environment. Lastly, performance is important (hence it is one of the goals) as you don’t want to wait forever to get your results back, but that performance gain cannot come at the cost of a secure or reliable system.

SQL Server is a specialised host and not a “simple bootstrap mechanism”. To meet SQL Server’s goals, some changes had to be made to the CLR, which are incorporated in version 2.0 (Whidbey). To allow “new” hosts such as SQL Server to have hooks into the CLR’s resource allocation and management, a .NET 2.0 host can use ICLRRuntimeHost instead of ICorRuntimeHost. The host can then call the SetHostControl() method to register a host control object, whose GetHostManager() method the CLR calls to obtain managers for things such as thread management, memory, etc. Here is how it would look:


Hosting the CLR
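The delegation pattern in the diagram can be sketched in C++. This is a simplified model of the idea, not the real mscoree interfaces: the runtime holds a host control object and asks it, by identifier, which subsystems the host wants to take over.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Base for host-supplied managers (illustrative, not the real COM interfaces).
struct IHostManager {
    virtual ~IHostManager() = default;
    virtual std::string name() const = 0;
};

struct HostMemoryManager : IHostManager {
    std::string name() const override { return "memory"; }
    // A real implementation would route allocations through the host's own
    // allocator and be able to fail requests under memory pressure.
};

struct HostTaskManager : IHostManager {
    std::string name() const override { return "tasks"; }
    // A real implementation would map runtime tasks onto the host's own
    // cooperative (user-mode) scheduler.
};

// Analogue of the host control object: the runtime calls GetHostManager()
// per subsystem; a null result means "runtime, manage this one yourself".
class HostControl {
public:
    HostControl() {
        managers_["memory"] = std::make_shared<HostMemoryManager>();
        managers_["tasks"]  = std::make_shared<HostTaskManager>();
    }
    std::shared_ptr<IHostManager> GetHostManager(const std::string& id) {
        auto it = managers_.find(id);
        return it == managers_.end() ? nullptr : it->second;
    }
private:
    std::map<std::string, std::shared_ptr<IHostManager>> managers_;
};
```

The design point is that the host opts in per subsystem: a lightweight host like the Windows shell returns nothing and lets the CLR manage everything, while SQL Server hands back managers for memory, tasks and so on.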

Resource Management – SQL Server lazily loads the CLR, so if you never use it, it is never loaded. Unlike the “regular” CLR hosts, where the CLR itself manages the resources, SQL Server manages its own thread scheduling, synchronisation, locking and memory allocation. This is done by layering the CLR’s mechanisms on top of SQL Server’s mechanisms. SQL Server uses its own memory allocation scheme, managing “real memory” rather than virtual memory. This allows it to optimise memory use, balancing between data and index buffers, query caches, etc. SQL Server can do a better job if it manages all of the memory in its process.
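The essence of host-owned memory can be sketched as a budgeted allocator. This is a hypothetical illustration (not SQL Server's actual allocator): the point is that the host tracks a budget and is allowed to refuse a request rather than grow past what the server decided to use.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// A budget-limited allocator: allocations beyond the budget fail cleanly
// instead of paging, which is what lets the host stay in "real memory".
class HostAllocator {
public:
    explicit HostAllocator(std::size_t budget) : budget_(budget) {}

    void* allocate(std::size_t bytes) {
        if (used_ + bytes > budget_) return nullptr;  // refuse, don't grow
        void* p = std::malloc(bytes);
        if (p) used_ += bytes;
        return p;
    }

    void release(void* p, std::size_t bytes) {
        std::free(p);
        used_ -= bytes;
    }

    std::size_t used() const { return used_; }

private:
    std::size_t budget_;
    std::size_t used_ = 0;
};
```

Failing an allocation, rather than silently growing, is exactly the kind of event the .NET 2.0 hosting APIs let the runtime be told about, so it can react (e.g. by collecting more aggressively) instead of crashing.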

SQL Server also uses its own thread management, putting threads to sleep when they are not running. This is also called the user-mode scheduler. SQL Server internally uses cooperative thread scheduling to minimise thread context switches, as opposed to the preemptive thread scheduling used by the CLR. This means that in SQL Server a thread has to voluntarily give up control, as opposed to the CLR, where a thread is preempted after a certain time slice.
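The cooperative model above can be sketched with a toy round-robin scheduler. This is a simplified illustration of run-to-yield scheduling, not SQL Server's UMS: each task is a resumable step function that runs until it voluntarily returns (the yield point) and reports whether it has more work.

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <vector>

// Cooperative round-robin: a task holds the CPU for a whole step and gives
// it up only by returning; nothing preempts it mid-step.
class CooperativeScheduler {
public:
    void add(std::function<bool()> task) { ready_.push_back(std::move(task)); }

    void run() {
        while (!ready_.empty()) {
            auto task = std::move(ready_.front());
            ready_.pop_front();
            if (task())                       // true = more work: requeue
                ready_.push_back(std::move(task));
        }
    }
private:
    std::deque<std::function<bool()>> ready_;
};
```

Because switches happen only at these well-defined yield points, there are far fewer context switches than with preemption; the cost is that a task that never yields starves everyone else, which is why hooks around blocking calls (PInvoke, I/O) matter so much to such a host.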

The hosting APIs in .NET 2.0 allow a runtime host to either control or “have a say in” resource allocation. The APIs manage units of work (called tasks) that can be assigned to a thread (or fibre). The SQL scheduler manages blocking points, and hooks PInvoke and interop calls out of the runtime to control switching of the scheduling mode. This allows SQL Server to supply a host memory allocator, to be notified of low-memory conditions at the OS level, and to fail memory allocations if needed. SQL Server can also use the hosting API to control I/O completion ports.

I’ll be discussing exception management, code loading and security in this new host in later posts. Till then, if you want to learn the details, either get the book mentioned earlier in this post or keep a lookout here. Also, if you plan on installing the newly released Beta 2, check out Bob’s post for some observations.

Google – Upping the ante? Believing the Hype?

If you live (or have ever lived) in the San Francisco Bay Area (also known as Silicon Valley), you would know that boosterism has long been as much a part of the valley’s culture as technology and money are. This Sunday the paper carried an excellent perspective on how, with Google’s upcoming IPO, Silicon Valley’s hype machine is in overdrive, raising expectations for potential investors. Below is the article, written by Jonathan Weber for the Mercury News; hopefully you will also enjoy the read – let me know what you think on the issue.

For months now, it’s been hard to avoid hearing about Google. Not just about Google’s wildly popular Web-searching technology, or the upcoming IPO that bucks Wall Street conventions, or the charming young founders and their righteous motto, “Don’t Be Evil.” But also about Google as a symbol of Silicon Valley’s resilience, Google as the spark that could ignite yet another technology boom.

The glowing media coverage is a sure sign that at least one part of the valley’s modus operandi survived the tech downturn in fine shape: the hype machine. Run by savvy financiers, clever marketing executives, story-hungry journalists and quote-producing pundits, the hype machine functions as a cheerleading squad that helps generate a frenzy of, yes, irrational exuberance. Over-the-top promotion is now as much a part of the fabric of the valley as technology itself — and yet, as Google is all too likely to demonstrate, it’s also a dangerous phenomenon that creates expectations that are almost always left unfulfilled.

With news last week that Google’s IPO is imminent, the hype machine is in overdrive.

Hype has its roots in the Janus-faced nature of Silicon Valley ideology. The valley is a place of great idealism, driven by the belief that innovation and great engineering can make the world a better place. Yet it is also a place of dispiriting greed, motivated mainly by the prospect of the “big score.”

When these two threads come together — the Google founders insist their company is all about doing the right thing, even as they prepare to pocket more than $100 million apiece in the IPO — it’s hard for even the most sober-minded observers to avoid being swept up in the enthusiasm. And the fact that so many in the valley have an interest in the success of new technology companies helps turn people into believers.

The ideological foundation of hype — engineering idealism combined with entrepreneurial capitalism — is in some ways simply an expression of America’s secular religion. In this age of fundamentalism and a president who values theology over science, it’s sometimes hard to remember that America was born of the Enlightenment belief in better living through technological progress.

Silicon Valley is what it is today, both literally and symbolically, because its denizens have learned how to tap the dimension of the American psyche that worships both innovation and the wealth it can bring.

In Silicon Valley’s early days, selling the valley was not entirely a conscious process: Bill Hewlett and David Packard were not exactly self-promoters. Still, as early as the late 1960s, the idea that a bunch of clever engineers and freewheeling entrepreneurs were turning California orchards into citadels of world-beating technology was beginning to capture the public imagination.

Apple of valley’s eye

But it was Apple Computer, with its appealing whiz kids and a machine that would bring computers to the masses, that ushered in the notion of Silicon Valley as a story. With Apple, there were magazine covers, TV appearances and a lionization of the young entrepreneurs. There was PR guru Regis McKenna, honing techniques that would become staples for a generation of publicists: the strategic leak (give an exclusive interview in the hopes of getting a long story) and the “embargoed” product review (you can see the computer early but only if you promise not to report about it until we say you can).

And most important, with Apple came a realisation on the part of Wall Street that there was money to be made in selling the promise of technology to the people. The December 1980 Apple offering was co-managed by Morgan Stanley – the first IPO deal in many years, noted the New York Times, for “a prestige house that deals only with such blue-chip clients as American Telephone and Telegraph, J.P. Morgan and Standard Oil.”

The hype machine really found its audience in the 1980s, when large numbers of individual investors moved into the stock market. By the early 1990s, the business pages were the biggest growth areas in newspapers, and the tech beat – long a dreaded backwater – had become sexy. Old-line technology trade publications were joined by a rash of new titles aimed at the burgeoning audience of tech enthusiasts.

While some might say it’s the media themselves that are responsible for hyping the latest and greatest from Silicon Valley, the reality is much more nuanced. Journalists and news organisations certainly reap psychic and financial rewards from being first – or best – in getting information to their audiences. When it comes to covering business, the press tends to get excited about transactions with multi-billion-dollar figures attached. Quotable and photogenic CEOs – think Steve Jobs, or Marc Andreessen – make good copy, too.

But the media are as much a mirror as anything else, and they reflect the careful effort – and considerable amount of money – that companies often spend in hopes of establishing their importance.

When Jobs’ second company, the ill-fated Next Computer, was ready to roll out its product in the late 1980s, it rented Davies Symphony Hall in San Francisco as part of its effort to ensure that the machine was greeted with sufficient grandeur. A few years later, start-ups General Magic and 3DO waged carefully orchestrated public-relations campaigns, hinting that they had revolutionary technology behind the curtain but saying little about it – until they were able to reveal the much-anticipated details via front-page stories in the Wall Street Journal. Both companies later went public, but ultimately failed.

Highs of dot-com era

Hype reached its apogee during the dot-com era, when no claim about the revolutionary importance of a company or product seemed too extravagant. An outfit like Webvan could promise to change fundamentally the way people buy their groceries. Sporting the right backers, it could translate that promise into hundreds of millions of dollars in investment, which in turn gave credibility to its claims and enabled it to generate the favourable publicity that would help raise even more money.

Even small-time dot-coms readily spent tens of thousands of dollars per month for public-relations agencies that might help them get their stories heard — and thus attract the money, the talent and the business partners they would need to succeed. (Unfortunately, the hype couldn’t deliver the one thing every business really needs: customers. Webvan went bankrupt.)

As editor in chief of the now-defunct Industry Standard – a print and online publication launched in 1998 to cover what we called the “Internet economy” – I learned from experience how hype can generate its own momentum. We sought from the beginning to take a tough, critical stance toward the inflating Internet bubble, and to point out that expectations were getting way ahead of themselves.

Even so, the Standard was also part of the hype. The very fact that we had a lot of advertising and a large staff reporting on the Internet economy validated the importance of the phenomenon, no matter what we were actually saying in the stories. Our weekly staff beer bashes grew into elaborate corporate-sponsored parties. At our conferences, companies paid as much as $150,000 for the privilege of wining and dining the assembled executives and, in one case, entertaining them with a private fireworks display.

While the ugly end of the dot-com boom was an object lesson in the dangers of hype, the Google phenomenon shows that surprisingly little has changed. Google is ripe for hype. It’s a highly profitable paragon of engineering excellence whose product is used by tens of millions of people. Its founders, one an immigrant no less, are in their very early 30s. It has two of the shrewdest VCs in the valley behind it: John Doerr of Kleiner Perkins and Mike Moritz of Sequoia Capital. And it’s managed to infuse even the IPO itself with higher principles by insisting on an auction format that diminishes the role of investment bankers.

Self-conscious idealism

Google would say that, as a company, it is not deliberately feeding the frenzy. Yet one of the paradoxes of the hype dynamic is that even actions ostensibly meant to dampen enthusiasm end up adding to it. Journalists consider Google a highly secretive company, and yet its reluctance to share information only makes people more curious. Its bold statements about managing without regard for quarterly profits and stock price serve only to increase investor interest in the shares. Its founders’ self-consciously idealistic stance plays perfectly to valley true-believers who want to do good even as they’re doing well.

But with veteran spin meisters like Doerr and CEO Eric Schmidt on board, it’s hard to believe there isn’t more than a little calculation behind Google’s flawless public positioning. No matter how good your technology or how fat today’s profits, pulling off an IPO that values your company at $30 billion is not a simple feat. Hype is part of the recipe.

The media, including this newspaper, have played to the script. The Mercury News devoted two-thirds of its April 30 front page to Google’s announcement that it would go public. There were six stories that day, replete with elaborate graphics and photos, three more stories the following day and two more the day after that. BusinessWeek, Fortune and Newsweek all featured Google on their covers over the past year. The sheer volume and prominence of the coverage has often overshadowed cautionary notes sounded in the stories.

And of course the Net itself is now a medium that feeds hype, providing a megaphone for anyone who might want to add to the buzz about a product or company. My friend and former Industry Standard CEO John Battelle is writing a book about Internet search, and his Searchblog has become a must-read for those interested in next-generation Internet technologies. John’s writing is sharp and sophisticated, and he isn’t setting out to stoke the hype. Yet the very existence of his blog can have that effect.

So what’s the harm in all this? If hype spurs investment in technology and innovation, isn’t that a good thing? Yes and no. Hype can help a company attain the momentum it needs to succeed. But it also drives expectations far beyond what can usually be fulfilled, and that sets the stage for an inevitable and painful reversal – as many a dot-com veteran and money-losing tech investor can testify.

In the case of Google, expectations are already way out of hand. The valuation being mooted for the company – $29 billion to $36 billion – basically assumes that it will remain the dominant search service, that its core business will remain highly profitable, and that a wide range of unproven new services will soon contribute substantially to the bottom line. Those are brave assumptions in a market where Microsoft, Yahoo and many other large rivals are just gearing up.

Banking on Hope

Further, even though Google is making itself look good with its IPO auction, it is also defying all good-corporate-governance wisdom by establishing separate share classes to keep control in the hands of the founders. It’s possible they will prove to be as brilliant at building and running a large public company as they have been at developing an effective search engine. But market leadership, technological innovation, ideological purity and huge profits all at the same time?

The notion that Google will somehow “re-ignite the boom” also has more to do with hope and belief than economic reality. The valley’s cycles are driven by a complicated set of factors; boom times happen when new technologies, consumer demand, inexpensive capital and other forces are all aligned. If a strong Google IPO encourages other IPOs, and thus encourages VC investments that are premised on the possibility of an IPO, all that has been “ignited” is another speculative bubble.

Silicon Valley ideology can be self-reinforcing: As long as everyone has faith, all the dreams can come true, at least for a little while. But surely, when it comes to Google, the singular lesson of the recent past applies: Don’t believe the hype.