
Ergh Windows 10



Plonky, because of my photography business I need Windows to run Photoshop etc. I do have a Mac as well, but the main PC is Windows, as all the software we use to do everything photography-related works on Windows. I am not going to buy it all again, and last time I looked you cannot mix Mac and Windows licenses. You have to buy everything again.


2 minutes ago, JDP said:

Plonky, because of my photography business I need Windows to run Photoshop etc. I do have a Mac as well, but the main PC is Windows, as all the software we use to do everything photography-related works on Windows. I am not going to buy it all again, and last time I looked you cannot mix Mac and Windows licenses. You have to buy everything again.

Yup, I do believe in using the right tool for the job, and Windows still has its uses here and there :-) My post isn't the usual rah-rah "use Linux, you nitwit" post... it's a humorous anecdote about just how old I'm getting in this business.


I really liked Win95/98/XP and loved Win7.

Since Win10 was released, with the bulk of our own and our clients' machines moving over via last year's free upgrades, I will never go back to Win7.

Win10 is much faster and, just as with Win7, I have never felt, experienced or seen any insurmountable irritations with it.

What I DO know is that if Win10 gives drama, check the hardware. That is the one rule I have with client PCs: if Win10 has an issue, I will bet the cost of the new hardware that the problem was with the older hardware. To date I have not lost that bet.

Have moved quite a few people on their older PCs over to Win10 with great success by replacing the old HDD with a new SSD and doing the in-place upgrade.

The combination of an older board with a newer SSD (if the machine can handle it) and Win10 makes those older machines fly like brand new.

Just saying.


Yup, an SSD is a wonderful piece of tech. You must just keep meticulous backups. It's not like the old drives, which started spitting out Klingon in syslog well in time for you to get your data off... when an SSD fails, it fails completely :-)

Oh, the old magnetic drives. They had a bunch of spare sectors, and the internal firmware would swap a bad sector out for a spare automatically. Beautiful tech, except it can only do the swap on a write cycle. So you'd get these beautiful read failures on a single sector, and all you had to do was rewrite the file and your drive would be good as new.
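
For what it's worth, a minimal sketch of how you can watch for those warning signs today, assuming smartmontools is installed and using a hypothetical drive at /dev/sdb:

# Overall SMART health verdict for the drive
smartctl -H /dev/sdb

# Current_Pending_Sector = sectors the firmware failed to read and is waiting
# to remap on the next write; Reallocated_Sector_Ct = sectors already swapped
# out to spares
smartctl -A /dev/sdb | grep -Ei 'Reallocated_Sector_Ct|Current_Pending_Sector'

# A short self-test reports the first failing LBA, if any
smartctl -t short /dev/sdb
smartctl -l selftest /dev/sdb

A non-zero pending count is usually the cue to rewrite the affected data so the remap can happen.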

So one day I had a RAID1 array failing. Usually when a RAID1 fails, you do a hot-remove on the drive, and then hot-add it back in. The array then rebuilds and because it rewrites the entire drive with data from the other healthy drive, all your bad sectors are swapped out and no drive replacement is necessary (unless this becomes too frequent).
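
The post doesn't say which RAID implementation this was; assuming Linux software RAID (md), the hot-remove/hot-add cycle looks roughly like this, with /dev/md0 and /dev/sdb1 as placeholder names:

# Mark the ailing member as failed and pull it out of the array
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# Add it straight back in; the rebuild rewrites every block on it from the
# healthy mirror, which is what forces the firmware to remap any bad sectors
mdadm /dev/md0 --add /dev/sdb1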

But not this day. On this day it went all the way to 99% and then failed again: there was a bad sector on the healthy drive right before the end.

So off I went. There is a HOWTO on the internet that helps you figure out which file uses a given sector. Except I was using LVM (logical volume manager), so there was another mapping in between. After a lot of mapping I figured out that the bad block was in fact unused: there was no data stored there. But because the RAID driver duplicates at block level rather than file-system level, it still tries to copy this unused (but now bad) block to the other disk. So I carefully typed the following, sent up a little prayer, and hit enter:

dd if=/dev/zero bs=512 count=1 seek=(block number here) of=/dev/sdaX

Then I rebuilt the array again and this time it succeeded.

We decommissioned that machine over a year later without any further bad blocks.
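
The "lot of mapping" above might look roughly like this on an md + LVM + ext3/ext4 stack. Every device name and number here is a placeholder, and the actual arithmetic depends on your partition table, md superblock and LVM extent layout (the smartmontools BadBlockHowto walks through it in detail):

# Suppose the kernel log / SMART self-test reported a failing LBA on a member disk.

# 1. Work out where that sector lands inside the logical volume: subtract the
#    partition start (fdisk), the md data offset (mdadm) and the LVM
#    metadata/extent offsets (pvdisplay/lvdisplay)
fdisk -l /dev/sda
mdadm --examine /dev/sda2
pvdisplay --maps /dev/md0
lvdisplay --maps /dev/vg0/data

# 2. Convert the 512-byte sector offset inside the LV to a filesystem block
tune2fs -l /dev/vg0/data | grep 'Block size'   # typically 4096
# fs_block = sector_in_lv * 512 / block_size

# 3. Ask the filesystem which inode (if any) owns that block, then which file
debugfs -R "icheck FS_BLOCK" /dev/vg0/data
debugfs -R "ncheck INODE"    /dev/vg0/data

# If icheck says the block is not in use, it holds no data and can safely be
# zeroed at the raw-device level, exactly as the dd command above does.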


1 hour ago, plonkster said:

Yup, an SSD is a wonderful piece of tech. You must just keep meticulous backups. It's not like the old drives, which started spitting out Klingon in syslog well in time for you to get your data off... when an SSD fails, it fails completely :-)

Oh, the old magnetic drives. They had a bunch of spare sectors, and the internal firmware would swap a bad sector out for a spare automatically. Beautiful tech, except it can only do the swap on a write cycle. So you'd get these beautiful read failures on a single sector, and all you had to do was rewrite the file and your drive would be good as new.

So one day I had a RAID1 array failing. Usually when a RAID1 fails, you do a hot-remove on the drive, and then hot-add it back in. The array then rebuilds and because it rewrites the entire drive with data from the other healthy drive, all your bad sectors are swapped out and no drive replacement is necessary (unless this becomes too frequent).

But not this day. On this day it went all the way to 99% and then failed again: there was a bad sector on the healthy drive right before the end.

So off I went. There is a HOWTO on the internet that helps you figure out which file uses a given sector. Except I was using LVM (logical volume manager), so there was another mapping in between. After a lot of mapping I figured out that the bad block was in fact unused: there was no data stored there. But because the RAID driver duplicates at block level rather than file-system level, it still tries to copy this unused (but now bad) block to the other disk. So I carefully typed the following, sent up a little prayer, and hit enter:

dd if=/dev/zero bs=512 count=1 seek=(block number here) of=/dev/sdaX

Then I rebuilt the array again and this time it succeeded.

We decommissioned that machine over a year later without any further bad blocks.

Oh, how wonderful Linux is ;)

Spoil yourself and run ZFS instead of software RAID, if your distro / kernel supports it.


4 minutes ago, SilverNodashi said:

Oh, how wonderful Linux is ;)

Spoil yourself and run ZFS instead of software RAID, if your distro / kernel supports it.

Last time I looked at ZFS, it was only supported by Solaris (and OpenSolaris), or as we call it in the biz: Slowlaris. Then at some point someone made a FUSE (userspace) driver for it... and that is where I lost track. The problem is the incompatible licensing, so no distro supports it out of the box.

At the moment I'm happy enough with ext4. It solved some issues I had with ext3. We had a client that stored millions of small files (1k or 2k), and we would always run out of inodes. In layman's terms, think of a conventional library with a paper card catalog: this happens when you have so many thin books that you run out of space in the catalog while there's oodles of shelf space left. With ext2 and ext3 we could allocate more inodes (by using a smaller block size), but that would limit the total size of the file system. With ext4, we get to have our cake and eat it.
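
To make the library analogy concrete, a quick sketch of how you'd spot the problem and the mkfs knob involved, with /data and /dev/sdb1 as placeholder names:

# IUse% at 100% while df -h still shows plenty of free space is exactly the
# "catalog full, shelves empty" situation
df -i /data
df -h /data

# At mkfs time, -i sets bytes-per-inode: a lower ratio means more inodes.
# Here ext4 gets one inode per 4 KiB block without having to shrink the
# block size to get there (placeholder device)
mkfs.ext4 -b 4096 -i 4096 /dev/sdb1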

Even played with things like Red Hat's GFS. That was a disaster. Tried Oracle's OCFS2, never got it working properly. At some point the extra complexity just doesn't add any more value :-)


2 hours ago, plonkster said:

Last time I looked at ZFS, it was only supported by Solaris (and OpenSolaris), or as we call it in the biz: Slowlaris. Then at some point someone made a FUSE (userspace) driver for it... and that is where I lost track. The problem is the incompatible licensing, so no distro supports it out of the box.

At the moment I'm happy enough with ext4. It solved some issues I had with ext3. We had a client that stored millions of small files (1k or 2k), and we would always run out of inodes. In layman's terms, think of a conventional library with a paper card catalog: this happens when you have so many thin books that you run out of space in the catalog while there's oodles of shelf space left. With ext2 and ext3 we could allocate more inodes (by using a smaller block size), but that would limit the total size of the file system. With ext4, we get to have our cake and eat it.

Even played with things like Red Hat's GFS. That was a disaster. Tried Oracle's OCFS2, never got it working properly. At some point the extra complexity just doesn't add any more value :-)

ZFS is native in the BSD kernels and Ubuntu has been supporting it for quite some time as well. 
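
On Ubuntu that support comes via the zfsutils-linux package (assuming a release recent enough to ship it). A mirrored pool as the RAID1 stand-in looks roughly like this, with the pool name 'tank' and /dev/sdb, /dev/sdc as placeholders:

sudo apt install zfsutils-linux

# Two-way mirror, ZFS's equivalent of RAID1
sudo zpool create tank mirror /dev/sdb /dev/sdc

# Every block is checksummed, so a periodic scrub finds and repairs silent
# corruption from the good side of the mirror
sudo zpool scrub tank
sudo zpool status tank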


39 minutes ago, SilverNodashi said:

ZFS is native in the BSD kernels and Ubuntu has been supporting it for quite some time as well. 

Aaaah ok. Well these days we host with Cloud companies so there is little need for fancy filesystems :-)

Those poor Windows people though, they have to make do with FAT, NTFS or exFAT. I suppose three file systems ought to be enough for anybody. :-)


1 hour ago, plonkster said:

Aaaah ok. Well these days we host with Cloud companies so there is little need for fancy filesystems :-)

Those poor Windows people though, they have to make do with FAT, NTFS or exFAT. I suppose three file systems ought to be enough for anybody. :-)

The cloud platforms generally still run on ZFS, ext3/ext4 and often Btrfs. I remember being stuck in a datacenter for 4 days trying to recover a Xen virtual machine on top of an LVM cluster, set up on a RAID-5 array. We took shifts watching the RAID-5 rebuild, hoping the array wouldn't fail midway. That was the last time I ever used RAID-5.
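
If that array was Linux software RAID (a hardware controller would have its own CLI), the nervous wait can at least be watched and nudged along; /dev/md0 is a placeholder:

# Rebuild progress, speed and estimated finish time
watch -n 60 cat /proc/mdstat
mdadm --detail /dev/md0

# The kernel throttles resync speed; raising the floor (in KB/s) shortens the
# rebuild at the cost of foreground I/O
echo 100000 > /proc/sys/dev/raid/speed_limit_min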

