
Transferring Windows 7 to a new computer

I purchased a new motherboard and CPU to upgrade both my processing power and my disk space. My original plan was simply to clone an existing 1TB drive onto part of a 2x2TB RAID array, but I ran into many issues, even with disk cloning, and went through a lot of trouble finding a method that worked. So, after much pain, here’s what I found:

1) The current stable Redo Backup is too old to detect the RAID device that my new motherboard’s BIOS was presenting; it refused to select it as a target device.

2) The current stable Clonezilla also has issues: it detects an md device, but then has trouble determining its size and refuses to actually write data to it.

3) The GParted live CD worked best. I used GParted to copy the partitions from the original drive to the new drive, then used dd to copy the boot sector, just in case.
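For reference, the dd step above amounts to copying the first 512 bytes of the disk (boot code plus the partition table). Here is a minimal sketch using stand-in image files instead of real disks; on real hardware the if= and of= arguments would be the source and target disks, and the device names would need to be verified with lsblk first:

```shell
# On real hardware the command would look like:
#   dd if=/dev/sda of=/dev/sdb bs=512 count=1
# (device names are examples -- double-check with lsblk before running).

# Create a fake 2 KiB "disk" and stamp some recognizable boot code into it:
dd if=/dev/zero of=olddisk.img bs=512 count=4 2>/dev/null
printf 'BOOT' | dd of=olddisk.img conv=notrunc 2>/dev/null

# Copy only the first 512-byte sector, as done for the real transfer:
dd if=olddisk.img of=bootsector.img bs=512 count=1 2>/dev/null
```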

What I found is that Windows 7 gets *REALLY ANGRY* when you just pop an existing installation into a new motherboard/CPU; it is basically unbootable. I found an article suggesting that you run Sysprep with the “generalize” and “out-of-box experience” options (/generalize /oobe) as part of transferring to a new machine:


Following these instructions and running Sysprep, I then found an issue with the Windows Media Player Network Sharing service: it needed to be stopped for Sysprep to work correctly.

(that link may not work without a login).

So, what I ended up doing thus far:

  1. Clone existing 1TB drive onto new, temporary 1TB drive.
  2. Boot old mobo system with cloned 1TB drive, run sysprep per instructions.
  3. Put sysprepped temporary 1TB drive into new mobo system, boot, let Windows do its first startup, finally install (most) drivers.

I ran into problems with some of Asus’s drivers, so I had to do these steps *AGAIN* to get to a working system.

My next step is to clone this now-updated 1TB drive onto a 2TB BIOS-based RAID array and hope for the best. I hope someone finds this information useful!

Sharing a Linux printer to Windows with Samba and Cups

So I recently have been setting up a new Fedora 14 Linux machine at home to replace what used to be my primary Windows desktop. I figured I would keep the printer physically connected to this machine, even though it would no longer be the primary desktop. That meant first getting printing working in Linux, and then printer sharing.

Getting printing working in Linux was fairly easy; in fact, the printer had already appeared in the list of printers without any real work on my part. I recalled from a previous attempt a while back that there are some neat tools specifically for HP printing in Linux, and I found them again at the HPLIP project. A quick install of that software in Fedora and I at least had local printing up and running.

Sharing the printer via Samba and CUPS is where it got a little tricky. I ended up fighting quite a bit with the specific configuration of Samba, finding lots of conflicting tutorials whose information didn’t make sense. I tried a few things and kept getting permissions errors.

I finally realized that, at least for printing, smbd runs as the user “nobody”. I also noticed that there happened to be a Samba-specific folder in /var/spool, so I put two and two together and figured SELinux would be happiest with Samba talking to that folder. Here, ultimately, is the setup I ended up with in smb.conf:

  [global]
  workgroup = YOURWORKGROUP
  server string = Samba Server Version %v
  security = share
  printing = cups
  printcap name = cups

  [printers]
  path = /var/spool/samba
  browseable = yes
  printable = yes
  public = yes
  create mode = 0700
  use client driver = yes
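With that configuration in place, the spool directory referenced by “path” has to exist and be writable by the “nobody” user. A sketch of that setup on Fedora (the restorecon call is guarded, since it only applies on SELinux systems):

```shell
# Create the print spool directory referenced by "path" in smb.conf and
# make it world-writable with the sticky bit (like /tmp), so jobs spooled
# as "nobody" work without permission errors.
mkdir -p /var/spool/samba
chmod 1777 /var/spool/samba

# On SELinux systems (e.g. Fedora), restore the expected security context;
# skip quietly when the tool is not available.
if command -v restorecon >/dev/null 2>&1; then
    restorecon /var/spool/samba
fi
```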

Adding the printer from Windows proved to be a snap:

  1. Browse to the computer name (\\yourlinuxmachinename)
  2. Double click the printer to connect to it
  3. Find the driver it needs
  4. Done!

Hopefully this will help some of you if you find yourselves fumbling around trying to make this sort of thing work.

Creating a Windows 7 bootable USB device from Linux

This really should not have been as hard as it was. I tried in vain to take the Windows 7 Ultimate 64-bit ISO I had downloaded from MSDN and put it on a USB HDD I had lying around. I had just built a new computer and did not bother to buy an optical drive. Unfortunately, my existing Windows machine ran 32-bit Windows XP, which meant running any files from the Windows 7 disc (like the boot sector program) was not a possibility.

I tried various tools like UNetbootin, WinToFlash, MultiBootISOs and others. I also tried some tricks with xcopy that did not seem to work. Since I work for Red Hat and am a Linux person, I happened to have a Linux machine at my disposal. Here’s what I found that worked:

  • I created a bootable (IMPORTANT!) 4GB primary partition on my 40GB external USB HDD
  • I formatted this partition with NTFS
  • I mounted the Windows 7 ISO and the NTFS partition, and copied the files from the ISO to the USB HDD
  • I used ms-sys to write a Windows 7 MBR to the USB HDD
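The steps above can be sketched as a shell sequence. Everything here is a dry run by default: the run helper only prints each command unless DEVICE is set to a real disk (e.g. DEVICE=/dev/sdb, after verifying with lsblk), and the win7.iso filename is an assumption. Note that ms-sys targets the base device rather than the partition:

```shell
#!/bin/sh
# Dry-run sketch of the USB preparation steps. Set DEVICE to your actual
# USB disk (double-check with lsblk!) to execute for real; left unset,
# each command is only printed.
DEVICE="${DEVICE:-}"
DISK="${DEVICE:-/dev/sdX}"   # /dev/sdX is a placeholder

run() {
    echo "+ $*"
    if [ -n "$DEVICE" ]; then "$@"; fi
}

run parted "$DISK" mklabel msdos                  # fresh MBR partition table
run parted "$DISK" mkpart primary ntfs 1MiB 4GiB  # 4GB primary partition
run parted "$DISK" set 1 boot on                  # mark it bootable (IMPORTANT!)
run mkfs.ntfs -f "${DISK}1"                       # format it with NTFS
run mkdir -p /mnt/iso /mnt/usb
run mount -o loop win7.iso /mnt/iso               # win7.iso: the MSDN ISO (assumed name)
run mount "${DISK}1" /mnt/usb
run cp -r /mnt/iso/. /mnt/usb/                    # copy the installer files over
run umount /mnt/usb /mnt/iso
run ms-sys -7 "$DISK"                             # Windows 7 MBR on the base device
```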

There was at least one caveat here. I saw, in a place or two, suggestions to use ms-sys against the partition itself. When running ms-sys against a partition, it complained, so I ran it against the base device (in my case, /dev/sdb).

Hopefully this will help someone out there!

    Random thoughts on net neutrality and free markets

    This is basically a copy of a comment I made on Fred Wilson’s blog, but I wanted to put it here so that other people (who might possibly pay attention to me) might see it, too.  So here are some random thoughts:

    – Wireless technologies (WiFi) have evolved extremely quickly because they are largely “unregulated”. No one really owns the spectrum and every company can make a device that can access that spectrum, so they all compete to offer better performance/features/etc. in that space.

    – The only organization that can create a monopoly is a government. Even if one company were to buy up everything and become the sole provider of a service, it still is not a monopoly. Either people will substitute something else in place of that service (walking instead of taking the train, even though it takes a long time), or someone will determine that the barrier to entry, no matter how significant, will ultimately provide a competitive alternative to the existing monopoly.

    – Cable and telephone companies have “near” monopoly over internet access, but it is only because they have already eaten the tremendous costs of infrastructure over time, and happened to be able to retrofit this infrastructure for use as a data transport infrastructure.

    – Verizon seems to think that, despite the start-up cost, there is a competitive benefit to setting up a new higher-speed data transport infrastructure, as one example. Companies like Clear have decided that, despite lacking performance comparable to other options today, there is a competitive benefit to investing in the infrastructure for their wireless data service.

    – “Net Neutrality” and spectrum auctions will likely serve to neuter the inevitable explosion of over-the-air data service as an alternative to existing wired infrastructures. Instead of making the internet and data services better, net neutrality will ultimately reinforce the near monopoly that the cable and phone companies already have, by eliminating the competitive advantage that wireless providers could exert over the cable companies by being net neutral. If Comcast were allowed to really manipulate its network traffic, customers who did not like it would move to services like Clear, trading some performance for a neutral experience. Forcing the net-neutrality hand means that this inevitable movement will be stifled.

    G1 in the house

    So I have been waiting for something to replace my T-Mobile Sidekick 3 (2.5 if you ask me) for probably close to a year. I was dissatisfied with the browser, really dissatisfied with the email capabilities, and my battery had lost some of its charge-holding capacity.

    When I saw that T-Mobile was going to be the first carrier to release an Android-powered phone, I was really happy. As details of the Dream first started coming out, I was even happier. When I watched the G1 press release, I was ecstatic. Now that I have the phone, I have a shit-eating grin.

    Sure, it will take some getting used to, but the ability to have *real* IMAP mail right out of the box is amazing. The integration with Google services has been a tiny bit frustrating, but not so bad overall. For example, I had to export my contacts from Outlook as CSV and import them into Gmail. But Gmail decided to lop off people’s mailing addresses, so I don’t have them in my phone. Not super important; I can re-populate those as I go.

    One thing I found is that Google introduced an Outlook calendar sync tool for Windows XP and Vista that works with Outlook ’03 and ’07. It basically just sits in RAM and syncs your Google Calendar to your Outlook calendar, and vice versa. The nice thing? Now if I enter an appointment on my phone, it will go to Google Calendar and then end up in Outlook. Neat!

    If only you could directly delete email from the “del” key on the keyboard!

    I’ll do my best to post little tips and tricks and neat applications I find as I come across them.  Until then, carry on with your bad phone!

    Reference designery, and the proliferation of Android

    Generally-interesting-guy-I-follow, Hank Williams, talked about the Kindle a little bit in his post on Arrington’s great Kindle idea. In it, Hank discusses the merits of creating a reference design based off of Amazon’s Kindle for other companies to emulate and create a class of “readers.” However, he mentions something about Android which I’d like to comment on.

    Interestingly, this is really what Google should be doing with Android. Google is indeed licensing the Android OS to third party phone manufacturers, but by not creating and controlling an initial reference design they are leaving important pieces of the design to third parties, in a field (mobile phones) where important design elements can be critical.

    I definitely agree with the Kindle software being offered up as an “OS,” but I’m not entirely convinced on the whole reference design concept. In the reader market, the paucity of players provides for an opportunity for Amazon to “set the bar,” so to speak, on performance and quality.

    When you’re talking about computers and operating systems, it’s the chip vendors that set the reference design. The i686 architecture, for example, supports a particular instruction set, communications/bus protocols, etc. The motherboard vendors have to adopt certain standards, offered by the chip maker in design guides, to make their products. The OS vendors have to make the OS work with the CPU’s instruction set and the various bridges/buses.

    Now, when you’re talking about Android, how would the reference design benefit the third party manufacturers? The benefit is already in the fact that the Android contributors have already done the legwork in designing the OS, so all the phone vendors have to do is make sure they pick hardware that can talk to it.

    If you were to look at it from the reverse, Google has extraordinarily limited experience in hardware design. Is it really in their best interests to design some “uberphone” that can run Android when most of the phone makers have a pretty good pulse on what the market wants?

    Sure, one can argue that the phone vendors don’t have a clue, and that’s why the iPhone is “smashing” other handset sales. There is a rush of copycat designs that try to approximate the iPhone in feature set and functionality, and they all hit the mark to one degree or another. However, when you look at the global scope of phone sales, the overwhelming leader in mobile web browsing (and probably total handsets sold) is actually a non-smartphone: the Motorola Razr.

    I think, to a certain degree, that the “best phone” on the market will always serve as a type of reference design, and the Android OS, in and of itself, will do the same. By being open and transparent and accessible, we will see both a large number of products/apps developed for Android, as well as variations on Android’s components that will make each handset unique in its own way, should the manufacturer choose to do so.

    The death of the record label?

    A lot of people these days seem to be writing about how record labels are dying or how record labels are evil or how the record labels want too much money. And it seems that the fact of the matter is that they may be right on all counts.

    In the olden days, record labels existed for a reason: to help fledgling artists make it big and to introduce them to a greater population of consumers than the artists could reach by themselves. This required a significant investment of funds. However, it seems that, today, this model is broken.

    Artists obviously still require promotion to get noticed. However, the avenues through which this occurs appear to be changing. The basic premise still applies:

    1. Put together a band
    2. Write some songs
    3. Go on tour
    4. Repeat steps 2-3 until
    5. Get noticed, get paid

    It is the “get noticed” step that seems to be changing. A few weeks ago, an article popped up about how Electronic Arts was signing artists to a music label of sorts. In this article, Priya Ganapati writes:

    Until now, game companies worked with recording labels or publishing firms to get licensed or original music, often opting for new and independent artists in an attempt to inject fresh, interesting and undiscovered music in their games.

    Priya then goes on to talk about how EA has basically cut out the record-label middleman by forming its own “label” of sorts, Artwerk. Additionally, this same model is being adopted by some larger corporations with huge marketing budgets, like Apple. Why pay the record label to license music from the artist and then do your own marketing for the game/product, when the label ultimately profits from the rise in popularity of the artist? That is simply a model that doesn’t make sense.

    Another broken model is radio. Traditionally, record labels were the might behind an artist’s radio debut, pushing promos out to radio stations all across the country. Now, with the advent of the internet and social media, it is far easier for consumers to access artists from all over the world and be exposed to their music. The RIAA is destroying “internet radio” by forcing the companies that play big-label artists to pay exorbitant fees/royalties back to the record labels. So, instead of protecting their artists and making more money, what the RIAA will do is force artists to band together and form micro-labels, potentially backed by big corporations and game makers, perhaps as a kind of symbiotic promotional entity that promotes the music in order to promote the game/product, and vice versa.

    But wait, micro-labels have existed forever. Let’s revise the list from above, shall we?

    1. Put together a band
    2. Write some songs
    3. Go on tour
    4. Repeat steps 2-3 until
    5. Get noticed, get paid by micro label for distribution/help/etc.
    6. Write songs
    7. Go on tour
    8. Repeat 6-7 until larger label comes along and buys you out

    However, considering the ease of digital distribution and the usefulness of the internet, it appears that the micro-label will reign supreme in the foreseeable future, the CD and the big labels will wane and die, and iTunes will collapse. Why?

    Music should not be free; that is just a silly hippie pipe dream. Sure, artists can make money playing concerts and selling merchandise, but their music has value outside of being performed live (i.e., recorded). The issue is that the recording industry’s model established a precedent for what music is worth, and the paradigm has shifted such that the industry no longer controls the value of music, because it no longer holds exclusive control over distribution.

    It is not that music is any less valuable in and of itself. It is more that there’s less need for someone to spend huge money trying to make music popular, which means there’s less expense to recoup, which means the savings can be passed on to the consumer.

    Obviously these are gross exaggerations and generalizations, but there is a point to all of this. The industry is changing, and the big recording companies are scared, but they’re also too stupid to change, drunk on decades of fat profits that they see eroding before them.  Poor fatties.

    Netbooks – A revolution

    Technology blogger and generally interesting fellow Hank Williams recently blogged about the new Latitude-On option laptops from Dell.  He writes:

    The basic idea of the new Latitude is that the machine will have a second ARM based processor and Linux operating system along side the standard Intel processor and Windows OS. This machine within a machine will provide a super fast, lightweight, battery friendly environment for doing things like email, web browsing, and perhaps other communications tasks. It will be “instant on”, so you will always be able to get to your basic functionality, and yet you will get a battery life measured in days and not hours when in this mode.

    While Hank is more concerned with the innovation being brought forth by Dell, it brings up a question in my mind: Who is going to use this? The thing I wonder about is the quality of the user experience and the interactivity with “corporate standards.”

    For example, this type of machine will likely apply to world travellers and executive types.  Who else would need their laptop battery to last so long just to be able to do things like check email or quickly browse the web?

    If that is, in fact, the target market for these machines, they better darn well make sure that the box can access things like Exchange mail servers and/or Outlook Web Access. And we all know how well that sort of thing works in Linux at this time.

    Not that I have anything against Linux… in fact I work for Red Hat currently!  However, I realize the integration troubles at the enterprise desktop level, so you can understand where my concern stems from.

    And if this is not the intended use case for this machine, is such a laptop really going to be able to compete with the new “netbooks” out there? Sure, it may offer significantly increased battery life, but at what cost? Without OpenOffice.org or some other word-processing suite available, it will not be very useful for students trying to take notes on a laptop and get a full class day’s battery life out of it.

    It seems like companies are slowly marching towards a pure “web terminal” type of portable thin-client. I wonder if we’ll ever see a netbook that simply boots its OS from the cloud and has zero storage.

    The mobile web – aging dinosaur

    There was recently a post over at Ajax Blog about Japan’s super-advanced mobile web. Serkan Toto discusses some of the intricacies of the unique mobile web that has evolved in Japan, a country where most people don’t have PCs and almost everyone uses their cellphone to browse the web.

    Toto writes:

    The availability of cutting-edge phones is one reason why many Japanese people don’t own a PC but would rather browse the web exclusively on mobile devices. And it’s not just for short bursts. They never write SMS either but rather thumb-text push-mails, often containing little icons, emoticons and coded youth slang acronyms. Booking flights online, ordering clothes, auctioning off used stuff, gaming, paying for movie tickets via direct debit: all of this has been possible on Japanese mobile phones for years now.

    This is a very valid point. Having spent time in Japan I can definitely say that I, myself, wrote rather long messages on my phone to friends. I also spent time writing full-length emails to people in Japan who were using their phone email address as their sole email access.

    However, despite the fact that Japan has a “relatively sound regulatory policy” when it comes to the mobile web, I see the “mobile web” itself as an aging dinosaur, and the iPhone is clearly the reason why.

    Firstly, I feel that Japan is relatively unique in its lack of home PC users. Europe, by contrast, is fairly PC-heavy (just look at the main contributors to open source software and Linux itself), and it has certainly embraced the cellphone. Just this morning a Businessweek author mentioned how she can rent a bicycle in Germany using her mobile. So the overwhelming majority of current web users probably get their access from both a PC and, potentially, a mobile.

    Secondly, the iPhone clearly makes a targeted reference to “the web on your phone.” Not the mobile web, but the real web. The big bad nasty web with awesome graphical content, multimedia, and all the whiz-bang you could ask for. In a world that is continually going “Web 2.0,” where AJAX is almost a requirement for a modern site, why would an organization spend all the time and effort to develop a supremely awesome website, only to have to develop an atrocious, dumbed-down, lo-fi version for some poor schlub’s tiny little phone?

    Let’s face it, folks. The iPhone demonstrates that the world is ready for UMPCs. Because that’s really what the iPhone is. Sure it is a phone, but it basically runs MacOS and can download and install new applications (software) and can do lots of PC stuff – including browse the web (the real web).

    As display technology gets cheaper, materials science gets better, and battery technology improves, we will definitely see more and more UMPC-like mobile “phones” on the market. As more individuals have access to real software and the real web on their mobile devices, the need for a separate “mobile” web will fade away.

    While WML is an excellent standard and is great in practice, its usefulness is running out, in my opinion. Mobiles just don’t have the same limitations today that they did even just a few years ago. With x86-based mobile devices on the horizon with the evolution of things like the Atom processor, can the “mobile web” really survive?

    Enterprise Social Networking – Why build when you can buy?

    In a recent post on TechCrunch, Erick Schonfeld discusses how an outside organization has designed a new Facebook application to not only replace the university-related features that Facebook removed, but to do so with tighter and direct integration to the university systems.

    Schonfeld brings up an excellent point in his closing statements (or at least, I’m extrapolating one):

    Inigral will charge a few dollars per student, and in return schools get a way to interact with their students on Facebook in a way that they can control. It is really a group management app for instructors, athletic teams, and student organizations to contact their members and manage events through a forum students are already using anyway.

    One of the hot things going on in the enterprise space is that your local MegaCorp, Co. is trying to implement its own internal social network and/or microblog for the benefit of internal employee communication and group performance. There is no reason another company couldn’t copy Inigral’s model and extend it with features suited to the enterprise space.

    Think about this for a moment.  If IBM designs an internal social network application (I shouldn’t say if, they already have done so) to offer features and functions to its 338,000+ worldwide employees, and it costs them $10/year/employee to build the infrastructure and maintain and administer it, that’s $3.3M/yr that they are spending.  This is probably a fairly deflated figure in reality.

    In fact, IBM went so far as to create a whole software suite, Lotus Connections, that has many “Web 2.0” and social networking features built in. But the reality is that this suite smells an awful lot like Facebook in an internal, enterprise-packaged form. The infrastructure, maintenance, and administration costs are still there.

    Now, if a company does something like Inigral and offers enterprise-related features, but can leverage the existing infrastructure of Facebook and charge substantially less, that is considerable savings.

    Now, the real question is: are there truly “enterprise” features that could be offered within the confines of a Facebook application that an enterprise would really want? Are there HIPAA/SOX or other compliance/auditing concerns that would make such an endeavor impossible to implement successfully? Obviously, security concerns would have to be addressed so that only an organization’s employees can view information about other employees within Facebook.

    Hopefully these thoughts will provoke some ideas in all of you.  I’d love to hear your thoughts.