
Advantages to joining my network to my domain?

Shane2943

Well-Known Member
So, about two years ago I bought a domain name, mostly for email, so I do have a registered domain name. Fast forward to the present: I got a server from work (scrapped) that I have set up with Linux, running on my home network. Its primary purposes are to be a router and a NAS, which it already is. However, this thing has two quad-core Xeons (E5405) and 8GB of RAM, so I can't help but feel it's being a tad under-used. ;) It's an IBM x3500.

I only have 2 other PCs besides the server on the network and it's all workgroup based right now. Is there any advantage for me to switch the network over from workgroup to my domain name? What other nifty things can I have this server do?

Brainstorming nerds UNITE! :D
 
Windows domains are completely different from DNS domains, and the old NT-style domains have themselves been deprecated in favor of Active Directory. So the short answer is that the question is a non sequitur.

Although Samba does allow you to set up Windows Active Directory, and maybe even legacy Windows domains and WINS, using one computer as a domain controller for only two clients is total overkill, and more trouble than it's worth. You'll gain nothing for all that work, and possibly create problems whenever the server is not running.

If you can't think of any ways to use those wasted CPU cycles on your own, probably the best thing you can do is to donate the machine to a charitable organization that's in need of such a machine. If you aren't the altruistic type, you might use it to transcode video files for use on your Android device. All of those cores could be brought to bear in converting bulky MPEG-2 video from DVDs and DVRs into compact H.264 files for your Android device. That's what I'd do if I had it.
 
Thank you for the reply! I thought about donating it, but transcoding movies to play on my media PC (the purpose of it being a NAS: storing terabytes of transcoded DVDs in my collection) on 8 cores sounds very attractive.

Your explanation of Windows domain vs DNS domain makes sense. I'm a networking newb.

Thanks, man. :)
 
You're welcome! It's gratifying to know that my MCSE training and work with Paul Vixie is still paying dividends.

If you're a self-described networking newb, I'm guessing that you're also fairly new to Linux. Therefore you might not be aware of the ability to script cron jobs and use utilities like Mencoder to transcode videos.

What are your plans for these ripped DVDs? I know that some people are using Matroska to save their DVDs complete with menus, so the experience will be identical to putting the disc in a DVD player. I haven't been interested in that myself, and haven't tried it. You'll probably have to give up that feature to make small video files for portable devices. The good news is that you can still keep the full rips on your server as well as H.264 versions for portable use as long as you have the storage capacity.

Don't be shy about asking how to do something in Linux. I'm in the process of building my 6TB RAID array to make room for new HD video files that are maxing out my old 3TB NAS that I put into place not long ago when my 1.6TB RAID array got full. Although I plan on keeping most of my videos in their original format on the largest array, I've been meaning to repurpose one of my smaller arrays to hold small file MPEG-4 versions for "grab and go" so I'll always have something to watch on the road. I keep on saying I'll get to automating the process. Maybe I'll finally get motivated if someone else asks how to do something like that.
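If and when I do automate it, the skeleton would just be a nightly cron job driving a little shell script. Something like this (completely untested, with made-up paths and encoder settings, just to show the shape of it):

#!/bin/bash
# Hypothetical batch job: re-encode the video in any MPEG-2 files waiting in SRC
# to H.264 (copying the audio as-is), skipping anything already converted.
SRC=/srv/video/incoming
DST=/srv/video/converted
for f in "$SRC"/*.mpg; do
    [ -e "$f" ] || continue                 # nothing queued up
    out="$DST/$(basename "${f%.mpg}").avi"
    [ -e "$out" ] && continue               # already converted
    mencoder "$f" -ovc x264 -x264encopts crf=20 -oac copy -o "$out"
done

and one crontab line to kick it off at 2 AM:

0 2 * * * /usr/local/bin/transcode-batch.sh >> /var/log/transcode.log 2>&1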
 

I'm not all that new to Linux. Been using it for about 10 years or so. Since the Mandrake 9 and Fedora 2 days. I'm no expert by any means, but I know my way around ok. :)

All 3 machines in my network run openSUSE 12.1 (except the new server, which runs openSUSE 12.2). My media PC is a home-built Core i5 machine running XBMC with a single 2TB hard drive (and a 160GB OS drive). The 2TB drive is already about 75% full, so the need for a NAS is looming. I want to buy four 2TB HDDs and run them in the x3500 server in two RAID-1 arrays. I'm open to suggestions here. I know the Adaptec RAID controller in the server supports RAID 0, 1, 5, and 10. Not sure about 6. Haven't played with it that much.

For transcoding, I use HandBrake (which I believe runs mencoder under the hood). With the Core i5, HandBrake can transcode a full-length movie (DVD, not Blu-ray) in about an hour. The dual Xeon server can do the same in about half that time. :D I definitely use MKV containers and H.264 format for the movies, which allows me to take advantage of VDPAU CPU-to-GPU offloading for decoding on playback. XBMC runs mplayer under the hood, and mplayer supports VDPAU natively with an Nvidia GPU. Linux is awesome. :smokingsomb:
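For reference, the sort of command line I feed it looks something like this (drive, title and quality settings are just examples off the top of my head, so double-check them against your own setup):

# example only: rip title 1 of the DVD into an H.264 MKV at constant quality 20
HandBrakeCLI -i /dev/sr0 -t 1 -o "Movie (2012).mkv" -f mkv -e x264 -q 20 -E lame -B 160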

I've been trying to think of other things I can have this server do while it's not transcoding (which is most of the time). Been thinking about distributed spare-CPU-cycle projects like folding@home or SETI. Dunno.
 
It looks like you have things sorted out well enough that I can't be any more help to you regarding the transcoding thing.

As for NAS, since you have a general purpose machine that can be a file server in an instant, that's already running the new Linux kernel with awesome disk I/O, and is lightly loaded, I'd say that adding storage to the server is far better than buying a NAS appliance. That's one great way to take advantage of the spare RAM, although disk and network I/O aren't very CPU intensive now that they all use DMA.

If you have the same 4-port Adaptec controller that I have, then you're off to a good start. That's the card I'll be using when I replace my 4x500GB array with 4x2TB on my Linux server box. Frankly, I think it's a waste of both disk space and your RAID card's capability to do simple RAID 1 mirroring. RAID5 is the way to go, especially when you have an HBA that does all the parity calculations on the card. You get the most total storage space, and will not lose a single bit of data if you have a single drive failure. One of the reasons I went with the Adaptec was its great RAID5 performance. Use it!

A distributed computing project looks like an ideal way to put your spare resources to good use. It just so happens that there's a Phandroid Folding@Home team right here. I have no opinions on space aliens, but think that if they exist and want our attention, they'll be able to contact us just fine. IMHO the distributed apps that are curing disease and making life better on our own planet are more worthy causes, and Folding@Home is one of the better ones.
 

I'm not sure of the model of this Adaptec controller, but it actually has no ports. It's a card with a battery that goes in a slot on the mobo of this server and then uses the 8-slot SAS backplane. I'll take a pic of the server and upload it in a bit. The drives are supposed to be hot swappable as well. They're all front slot-loaded drives, no internal drives.

I'm also a RAID newb, so since you seem to know a bit about it, I'm gonna pick your brain (don't worry, my hands are sanitized ;) ). :D I was thinking about RAID 5, but I kept reading on various blogs and forums that it was a bad idea because I'll be using consumer-grade SATA drives and because of their large size. RAID 5 only allows one drive to fail, and I was reading that it is possible, and even likely, that another drive would fail during the rebuilding process, especially if the drives are large (like 2TB drives) and bought at the same time. If that happens, the whole array is lost. Plus, what would happen if something went wrong with the RAID controller? I do not believe I can take the drives and individually extract data off of them in case of controller failure. I believe this is possible with RAID 1 because there's no parity info on the drives; they're just mirrored. Now, I totally agree that RAID 1 is the most inefficient use of the disks, but I'm paranoid of failures. RAID 6 seems a bit promising, but I need to find out if this controller even does 6.

What do you think? The only amount I know about RAID is what I've read. I do mess with RAID a little bit at work, but my experience is very limited.

And I'll check out that folding at home team here! :)

Here are pics of the server. It's an enterprise-grade IBM x3500:
(The slots in the front hold SAS or SATA drives)
2012-09-29112833.jpg


Here's the inside. The card at the top of the pic, the one with the battery, is the RAID card.
2012-09-29113045.jpg
 
I'm not sure of the model of this Adaptec controller, but it actually has no ports. It's a card with a battery that goes in a slot on the mobo of this server and then uses the 8-slot SAS backplane. I'll take a pic of the server and upload it in a bit. The drives are supposed to be hot swappable as well. They're all front slot-loaded drives, no internal drives.
Then what you have is way beyond my budget, and presumably at least as good as my Adaptec HBA. No worries.

If you have 6 or 8 drive slots, then your options are a lot better. For example you can make a pair of identical RAID0 stripe sets (for smoking fast I/O) and mirror the two stripe sets into a RAID 0+1 arrangement. The bad news is that you have half the total capacity that you'd have from the sum of all drive capacities. The good news is that you'd have great performance and fault tolerance.

I'm also a RAID newb so since you seem to know a bit about it, I'm gonna pick your brain (don't worry, my hands are sanitized ;) ). :D. I was thinking about RAID 5, but I kept reading on various blogs and forums that it was a bad idea due to the fact that I'll be using consumer grade SATA drives and because of their large size. RAID 5 only allows one drive to fail, and I was reading that it is possible, and even likely, that another drive would fail during the rebuilding process, especially if the drives are large (like 2TB drives) and bought at the same time.
In nearly 30 years of doing professional IT work, I've never seen two drives fail completely at the same time. Just because you read a lot of people write something on the Internet doesn't mean it must be true. It's more likely that a lot of people are repeating the same ignorant rumors.

"SATA" does not automatically mean "consumer grade" with the connotation that it's less reliable. I've been using the more costly Seagate "network server" (N) drives along with the "desktop" (AS) drives, and have replaced both types with larger capacity drives before either failed. That's why I quit paying the premium price for the N drives and went with the cheaper AS drives for my RAID arrays. After all, RAID stands for "redundant array of inexpensive disks". If you're spending a bundle on the drives for your RAID array, what's the point?

If your RAID5 array has only the bare minimum of 3 drives, then yes you will need to "replace and rebuild" ASAP. But if you have a larger RAID5 array, you can make it more reliable. RAID6 will sustain multiple simultaneous failures, and having hot standby drives online to start the rebuild process automatically means that you can have drives failing constantly and still maintain data integrity as long as you have an unlimited supply of new drives to replace the failed ones with.
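To make the hot spare idea concrete, here's roughly what it looks like with Linux software RAID (mdadm), which is easier to show in a forum post than the Adaptec BIOS utility that does the equivalent on your card. Device names are made up:

# four active 2TB members in RAID5 plus one hot standby that rebuilds automatically
mdadm --create /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 /dev/sd[bcdef]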

But IME with RAID and doing regular backups, I've never lost a single bit of data using plain old RAID5. I think once I had a server page me to let me know that a drive had failed, and I went back to sleep, came in to work the next day and replaced the failed drive with no problem.

Plus, what would happen if something went wrong with the RAID controller?
If you're worried about that, then buy another RAID controller and create a duplex RAID1 array. You could also go with the much more expensive and much less capacious SAS drives. If that's still not good enough, you could use one RAID controller to create a mirrored RAID1 array, and the second RAID controller to make a second mirrored array that you duplex with the first for RAID 1+1. Or, if you have the slots, you could get four RAID controllers, put a RAID1 mirror set on each, and duplex all four into a massively redundant multi-level RAID array with very little storage capacity.

If you cannot risk the slightest chance of a failure, go buy multiple clusters of mainframes, in various locations all over the world, using dedicated high speed data connections to synchronize them at all times. If you have that kind of money, of course.

Most people find that their budgets dictate something less than the most extreme fault tolerance. And IME most of them never experience catastrophic losses as long as they keep up a good backup routine. But it's your money and your choice. If you can afford it, don't let me stop you.

The bottom line is that you'll never achieve perfection at any cost. If you really have the need and the money, my advice is to hire a good consultant to set it up for you. If this is less than "life or death" critical, you might consider a setup that's more in line with whatever you're using the system for.
 

Amazing what companies will throw away. That's how I got this server. I also got a 2nd identical server that I will use for parts if/when stuff fails on this one. Both were runners pulled from sites because of upgrades. Nothing wrong with either one.

Great points all around. The bolded part made me laugh! LOL I'm not that crazy. Just fearful of failures. But you made excellent points about RAID5, and since this server does shut off during the day, that will save some drive life. Dunno if I can set this thing up to spin down drives that aren't being accessed.

I'm going to re-explore RAID 5. Thanks for the info, friend. :)

Couple more questions: when a drive fails in RAID, do I get an alert through the OS? How does that work?

Also, how does one back up 4+TB of data (without a second NAS)?
 
Amazing what companies will throw away. That's how I got this server. I also got a 2nd identical server that I will use for parts if/when stuff fails on this one. Both were runners pulled from sites because of upgrades. Nothing wrong with either one.
I was meaning to ask you about that. I've gotten some corporate hand-me-downs, but nothing approaching something as new and powerful as what you have. The closest I ever came was getting half off on a $10,000 server because I made the purchasing decisions for a $10,000,000 per year client. You are one lucky guy!

Great points all around. The bolded part made me laugh! LOL I'm not that crazy. Just fearful of failures.
The thing about the bolded part is that it's a fairly common practice for large corporations. I wasn't kidding at all. There used to be a company called Comdisco that specialized in colocating mainframe computers at their various facilities around the world for companies like airlines and other businesses that couldn't afford to be down ever. If you have the funds, I can arrange something similar for you as part of my consulting business.

But you made excellent points about RAID5, and since this server does shut off during the day, that will save some drive life. Dunno if I can set this thing up to spin down drives that aren't being accessed.
I wouldn't do that if I were you. I've been a motorsports fan for most of my life, and have learned a thing or two about racing and the cars that are used to do it. Many types of full-on race motors use a device called a pre-oiler to force lubricant oil over bearing surfaces before the motor turns over to reduce friction. This is done because 99% of the wear and tear done on regular car motors takes place in the first few seconds of operation, before the lubricant can circulate to where it's needed. This rule holds true for pretty much anything that spins on bearings, including computer hard drives.

You're not going to find a HD with a pre-oiler, so the worst thing you can do to the longevity of a HD is to spin it up and down often. Premature wear due to lubricant starvation is a well understood cause of failure, but it's not the only one. Heat also plays a role not only in HD life, but in performance as well. Bearing surfaces that are cold can have enough space that they bang against each other until they swell up to their operating tolerances. This can cause galling and premature bearing failure. In addition, as data density grows, modern HDs absolutely must be calibrated from time to time because their tracks will move out of range as minute variances in the size of the platter occur. Because of this most HD storage is kept running constantly in order to reduce wear and maintain thermal stasis.

If you must shut down your machine daily, I suppose you'll need to take extra precautions like doing an incremental backup prior to shutdown. Under no circumstances should you set your drives to spin down if they don't need to. That's strictly an energy-saving move for battery powered devices.

I'm going to re-explore RAID 5.
Please do. RAID5 is the most economical, and IME it's the most you're likely to need as long as you keep a good backup regime. RAID6 adds even more security. Some complain that RAID5/6 is "too slow" compared to massive RAID0 stripe sets. While that may be true, the throughput provided by a properly designed RAID5 array is adequate for all but the most extreme (read: gamer) applications. The cache RAM (that's what the battery's for) on the RAID card helps a lot in boosting real world performance.

Couple more questions: when a drive fails in RAID, do I get an alert through the OS? How does that work?
It depends on how the system is engineered. The cheapest and most common method is to use monitoring software on the computer itself to query the HBA and handle alerts of drive failures, as well as S.M.A.R.T. information that can predict failures before they happen. S.M.A.R.T. is one reason why I am a lot less concerned about having a HD drop dead out of the blue any more.
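The software end can be as simple as a script run from root's crontab. Here's a rough, untested sketch; the device names and the mail address are made up, and a hardware RAID card may need one of smartctl's -d pass-through options before it can even see the member drives:

#!/bin/bash
# Poll S.M.A.R.T. health on each disk and mail a warning about anything not PASSED.
for dev in /dev/sd{a,b,c,d}; do          # placeholder device list
    if ! smartctl -H "$dev" | grep -q PASSED; then
        echo "S.M.A.R.T. health check failed on $dev" \
            | mail -s "Disk warning on $(hostname)" admin@example.com
    fi
done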

What happens next depends on what your hardware supports and/or what you want. Back in olden times, Compaq servers had a serial port that could be hooked up to a modem that could dial up a pager or another computer that would then notify someone of the problem. These days there are more sophisticated hardware-based monitoring cards ranging from proprietary designs by the manufacturer to open standards-based IPMI hardware.

You're stuck with whatever your hardware manufacturer offers (or not). Although IPMI is an open spec, there are no universal IPMI cards that work in every machine. Most of these have an Ethernet interface and offer a wide variety of ways to alert the operator, from pager to text message to e-mail to pretty much anything you can rig up.

For one system I used an old cellphone taped to the top of the rack, and connected to a serial port on our serial console server. (These were Sun machines.) The console server could send SMS messages through the cellphone to the phones of the technical and management staff, and it could accept incoming calls and connect to any machine. Since the cellphone had its own battery, the whole data center could be blacked out and I'd still have communications.

The hardware method is useful because it can notify you even if the OS has stopped running, which can happen with a severe drive failure. The method that I prefer is the SMS tool that comes with my OpenSUSE distribution. I write my own scripts as glue between my RAID card software and the SMS script.

I'm pretty sure that IBM has its own proprietary system management cards. If yours has one, great. If not, you might balk at the retail price of one, compared to the free price of the machine.

Also, how does one back up 4+TB of data (without a second NAS)?
Tape backup is the most common method in the data center. A tape drive or carousel with the capacity to back up all of your data in a reasonable amount of time isn't cheap, but as you're going to find out, there's a lot more to a complete system than the computer.

Because high capacity tape drives are so costly, and because most home backup products now use an inexpensive albeit slow external HD, I supplement my own cross-backup product with inexpensive external drives or arrays. I've been using a 2TB USB drive to store a duplicate of the 2TB data partition on my Windows workstation that runs the software that turns my TiVo recordings into unencrypted MPEG files. I use an inexpensive ($1K) Netgear NAS appliance to store a duplicate of the contents of my file server and my Linux workstation, and an IEEE-1394B connected external HD to back up my Linux workstation.
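The copy itself doesn't have to be anything fancier than an rsync one-liner run on whatever schedule suits you (mount points here are made up):

# mirror the media share onto the external drive, pruning files deleted from the source
rsync -aHv --delete /srv/media/ /mnt/usb-backup/media/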

I'm not going to go into the details about my own cross-backup product because that's how I put food on the table. ;) I will say that it makes the semi-portable USB and IEEE-1394 redundant to the point that when I'm on the road I carry one or more of them for convenient access to my data without being bound to a slow Internet connection.
 
You really know your stuff! Geewhiz.

RAID5 definitely sounds like the way to go. I do like the idea of having an extra 2TB of storage vs the mirrored pairs.

The main reason I have the server shut off during the day is to save electricity. I can have it run 24/7 though and see how that affects the electric bill. As far as spinning the drives down, I can understand and appreciate what you're saying there.

I already have the OS on a single hard drive. I had planned for 4 additional drives, 2TB each, to be the main storage for the movies and whatever else. The throughput of the array is not as important to me. As long as data can be read as fast as or faster than a single drive, then it will work fine. I will stream the movies over the 1000BASE-T network to the media PC using an SMB share.
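On the media PC side I figure it's just a matter of mounting the share somewhere XBMC can see it, something like this (server and share names are placeholders until I actually set it up):

# mount the server's movie share read-only on the XBMC box
mount -t cifs //x3500/movies /mnt/movies -o guest,ro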

I didn't realize tape backup was still so widely used. I'll have to explore a multi-drive external backup for this. The room the server is in is not kept at 50°, but it's at room temp. The door is kept closed so dust is at a minimum, and the single drive in there now does not get overly warm. Airflow over the drive(s) is good.

For failure detection, I'll need to get into the RAID controller and see what options are there. The server does have a web-based BIOS interface called RSA-II that I have configured and can access over the network. I didn't see anything in there about drives though, so that might be handled through the RAID controller.

I'm still trying to figure out how to get F@H to start on boot, but I can start it manually and it runs fine. If I end up leaving this server running all the time, starting it at boot may become moot anyway.

Sorry for the all-over-the-place post. I'm all frazzled today! And yes, the company scrapped many of these IBM servers and other models as well, all of them runners. Makes me scratch my head, but heck, I didn't scratch it too long! LOL
 
You really know your stuff! Geewhiz.
Yeah...well...that's what happens when you get a BSEE, then spend your whole professional life doing systems engineering. You get old and tired and full of all sorts of information that is helpful once in a while. :)

RAID5 definitely sounds like the way to go. I do like the idea of having an extra 2TB of storage vs the mirrored pairs.
You can never have enough storage, and RAID5 is still the happy medium IMO.

The main reason I have the server shut off during the day is to save electricity. I can have it run 24/7 though and see how that affects the electric bill. As far as spinning the drives down, I can understand and appreciate what you're saying there.
Speaking as someone who held the record for the highest monthly power bill in my 2000 unit apartment building for 3 years straight, I know about what running big servers can cost. (I was also the only unit that had the A/C running in the winter, BTW. :D) You don't need to go to extremes either way. If you're not using it 24/7 by all means turn it off when it's not in use. Just be sure to avoid "green" drives, and make sure what you get isn't set to spin down as a power saving measure. Not only will that make the drives grow old faster, it will wreak havoc on the RAID array to have sleeping drives that aren't responding in time.

I already have the OS on a single hard drive. I had planned for 4 additional drives, 2TB each, to be the main storage for the movies and whatever else. The throughput of the array is not as important to me. As long as data can be read as fast as or faster than a single drive, then it will work fine. I will stream the movies over the 1000BASE-T network to the media PC using an SMB share.
Sounds good. I've been meaning to try putting the OS on SSD drives on my Linux machines (a 30GB SSD is enough for a full OpenSUSE install) to see if it makes the systems more responsive. Having it that way now leaves that open for you if you want to. Or you can add a drive for RAID1 protection. With Linux I don't worry about the OS as much because if something happens I can do a clean install and copy a few /etc files and be back in business quickly.

I didn't realize tape backup was still so widely used. I'll have to explore a multi-drive external backup for this.
They're still used in data centers, which is why they're so expensive. The pro stuff has always been expensive. Cheap disks are a good alternative.

The room the server is in is not kept at 50°, but it's at room temp. The door is kept closed so dust is at a minimum, and the single drive in there now does not get overly warm. Airflow over the drive(s) is good.
You don't need to worry about overheating. What I was talking about was having the temperature going up and down as the drives went on and off. You don't want that. Having the drives in their hotplug carriers means that you don't need to worry about cooling.

For failure detection, I'll need to get into the RAID controller and see what options are there. The server does have a web-based BIOS interface called RSA-II that I have configured and can access over the network. I didn't see anything in there about drives though, so that might be handled through the RAID controller.
You're on your own with that part. My only experience with IBM stuff was 10 years ago, and I wasn't responsible for any of the hardware, so I'm no help there.

I'm still trying to figure out how to get F@H to start on boot, but I can start it manually and it runs fine. If I end up leaving this server running all the time, starting it at boot may become moot anyway.
Have you tried stealing an rc.d script to start it? You should be able to find one that isn't overly complex and redo it with your stuff. Just don't forget to enable it at runlevel 3 at the very least. 3 and 5 if you're running X. OpenSUSE has the scripts in /etc/rc.d and links to them in /etc/rc.d/rc0.d /etc/rc.d/rc1.d and so on. Instead of making the links by hand I use the runlevel manager in Yast. One of the main reasons why I switched to it.

If all else fails, you can start it from /etc/inittab. Anything you start from init will get restarted automatically by the kernel if you use the respawn argument. Just look at getty for an example. I'd use that as a temporary last resort though, just until you can get an rc script to manage it.
 
Yeah...well...that's what happens when you get a BSEE, then spend your whole professional life doing systems engineering. You get old and tired and full of all sorts of information that is helpful once in a while. :)

Well, I envy you, sir. I've got some knowledge, but there's a whole lot I don't know. Doesn't mean I can't know it, just means I don't (yet). :)


Speaking as someone who held the record for the highest monthly power bill in my 2000 unit apartment building for 3 years straight, I know about what running big servers can cost. (I was also the only unit that had the A/C running in the winter, BTW. :D) You don't need to go to extremes either way. If you're not using it 24/7 by all means turn it off when it's not in use. Just be sure to avoid "green" drives, and make sure what you get isn't set to spin down as a power saving measure. Not only will that make the drives grow old faster, it will wreak havoc on the RAID array to have sleeping drives that aren't responding in time.
OK, sounds good. The drives I was thinking about are either Samsung or Hitachi drives, but I'll make sure they're not 'green' or power saving drives. I've had bad experiences with WD and Seagate.


Sounds good. I've been meaning to try putting the OS on SSD drives on my Linux machines (a 30GB SSD is enough for a full OpenSUSE install) to see if it makes the systems more responsive. Having it that way now leaves that open for you if you want to. Or you can add a drive for RAID1 protection. With Linux I don't worry about the OS as much because if something happens I can do a clean install and copy a few /etc files and be back in business quickly.
Exactly. And I can always backup my /home to the array and backup drives too.

Have you tried stealing an rc.d script to start it? You should be able to find one that isn't overly complex and redo it with your stuff. Just don't forget to enable it at runlevel 3 at the very least. 3 and 5 if you're running X. OpenSUSE has the scripts in /etc/rc.d and links to them in /etc/rc.d/rc0.d /etc/rc.d/rc1.d and so on. Instead of making the links by hand I use the runlevel manager in Yast. One of the main reasons why I switched to it.

If all else fails, you can start it from /etc/inittab. Anything you start from init will get restarted automatically by the kernel if you use the respawn argument. Just look at getty for an example. I'd use that as a temporary last resort though, just until you can get an rc script to manage it.
I have not tried an rc.d script yet. Been trying to get it working in /etc/init.d (per the instructions on F@H's own website) or through crontab. Neither seems to be giving results. The program/script itself that I want to run is located in /root/folding (because I installed it as root) and if I run /root/folding/fah manually it starts up and runs fine. My experience with rc.d is.....well.....none......but I'll do some searching and see what I come up with. I boot to runlevel 3 on the server but do have X installed (with XFCE) in case I need it. I usually use it with NX though instead of at the console.

Thanks! :)
 
Looks like you have a plan coming together now.

I have not tried an rc.d script yet. Been trying to get it working in /etc/init.d (per the instructions on F@H's own website) or through crontab. Neither seems to be giving results. The program/script itself that I want to run is located in /root/folding (because I installed it as root) and if I run /root/folding/fah manually it starts up and runs fine. My experience with rc.d is.....well.....none......but I'll do some searching and see what I come up with. I boot to runlevel 3 on the server but do have X installed (with XFCE) in case I need it. I usually use it with NX though instead of at the console.
FYI with SuSE, init.d and rc.d are kinda the same. One is linked to the other. It doesn't matter unless you have a script that relies on the PWD variable.

I was thinking of using a cron job to check to see if the process is still running and start it if it wasn't. But I can't imagine how you'd get it to start a process to run as a daemon. When cron exits, so do all of the child processes, so you can't append a `&' to keep it going. At the other extreme, if you have a script that never ends started by cron, you'll have one more instance of that script starting every time cron runs it. So you could end up flooding the machine with dozens of fah processes if you're not careful.
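If you do go the cron route, a guard like this keeps it from piling up copies, and setsid detaches the client from cron's session so it keeps running. Untested, and the path assumes you relocate the client out of /root as I suggest below:

#!/bin/bash
# Start the folding client only if it isn't already running.
if ! pgrep -f /opt/folding/fah6 > /dev/null; then
    cd /opt/folding || exit 1
    setsid ./fah6 -smp < /dev/null > /dev/null 2>&1 &
fi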

First of all, I'd move the `folding' directory from `/root' to some place like `/opt' or `/usr/lib' and adjust the Linux SMP client-as-a-service setup to fit. You can use `/etc/init.d/after.local' to put the line that starts it up in. Something like:

/opt/folding/fah6 -smp < /dev/null > /dev/null 2>&1 &

You'd have to make sure that the directories and files have the appropriate privileges, mainly read as well as execute. The command line in an rc.d script runs as root, so there's no need to do any su or sudo crap. It's been a while since I used the local script, so you'd better make sure it gets started at some runlevel on the way to 3. If it's not there, link the local file to /etc/rc3.d/S99after.local

The Linux SMP Client Installation Guide is crap, and riddled with errors or omissions. So you're going to need to marshal all of your BASH scripting know-how to parse out stuff that shouldn't be there and do things (like cd to `/etc/init.d' before doing all those echo commands) that weren't spelled out. If in doubt, substitute the entire path from the root directory for every directory and file.

To think that came from Stanford! :laugh:
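If you'd rather skip their guide entirely, a bare-bones init script of your own is only a few lines. An untested sketch, using the /opt path from above:

#!/bin/bash
### BEGIN INIT INFO
# Provides:          folding
# Required-Start:    $network
# Required-Stop:
# Default-Start:     3 5
# Default-Stop:      0 1 2 6
# Description:       Folding@Home client
### END INIT INFO
case "$1" in
    start)
        /opt/folding/fah6 -smp < /dev/null > /dev/null 2>&1 &
        ;;
    stop)
        pkill -f /opt/folding/fah6
        ;;
    *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
esac

Drop it in /etc/init.d/folding, make it executable, and enable it for runlevels 3 and 5 with the Yast runlevel editor.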
 