Synology DS2015xs | Our Favorite and the Most Affordable 10GbE NAS (Yeah…It’s Fast)

By Pye Jirsa on November 3rd, 2015


Network attached storage is one of the biggest areas of uncertainty for photographers, and rightly so. Professional photographers and cinematographers generate massive amounts of media, and managing all of that data can get downright cumbersome. We have tried pretty much every solution in the book, and in this article, we are going to talk about our favorite small business NAS solution: Synology.

Past Solutions

When it comes to storage and data management, you name it, and we have probably tried it.

Inexpensive Western Digital drives from Costco are great, but extremely cumbersome when it comes to building large volumes of redundant data.

We have tried RAID devices like Drobos. Honestly, they were great as consumer-built products: simple to set up and quite easy to use. The only problem is that their software/hardware RAID systems just aren’t fast enough for professional use; moving media back and forth on a daily basis simply takes too long. Still, it’s a decent solution for consumers, hobbyist photographers, and perhaps even the single-photographer studio shooting 15-20 jobs a year.

Today, we shoot 300-350 weddings a year, hundreds of portrait sessions, maternity shoots, newborn shoots, commercial shoots, etc. In fact, each year we are shooting nearly 1,000 client commissions as a studio. This forced us down the road of building our own internal server rack that could handle our daily data transfer needs.

But when we were a growing studio, shooting around 100 to 200 weddings a year, we relied on the Synology DS2015xs (8-bay system) as well as the Synology DS1515 (5-bay system) and various other Synology models. In this article, we are going to nerd out a bit and tell you why these were our favorite out-of-the-box NAS solutions and why you should look into them as a professional photographer.

In particular, we are going to do a full review and give you all the specs on the Synology DS2015xs, an investment that has a lot of growth potential for successful and growing photographers and photography studios.

To help me with my review, our internal “IT God” Joseph Wu is going to be aiding me with some of the more technical tests. So big thanks to Joseph for all of his help with this review.

Synology DS2015xs Review

Synology has always been our go-to when it comes to network storage. We still have quite a few in operation here at our studio, ranging from the smaller DS211 all the way up to the DS1813+. As mentioned in the introduction, we recently had to move our production media from several DS1813+ units onto a custom 10GbE FreeNAS-based server, as our media workloads were simply outgrowing bonded 1GbE connections. Thus, when Synology announced the DS2015xs, we were very excited, as it is the most affordable 10GbE NAS to date, in the same Synology 8-bay form factor that everyone is accustomed to.

It’s hard not to compare it to the many “enterprise” level offerings such as our own in-house developed 10GbE FreeNAS solution, or the other Synology rack stations. Needless to say, the DS2015xs has really blown us away with the performance per dollar it delivers.

Before we begin, we’d like to thank Marivel @ Synology USA for providing us with the review unit, along with Seagate for working with Synology to provide us with eight of their latest Enterprise NAS drives.


Design

Synology has kept the same industrial design across the last few generations of their 5- and 8-bay small-to-medium business NAS units, with several refinements over the years: rounded drive bay latches, updated drive activity indicators, and tool-less drive caddies. The design is definitely quite good; in fact, it is above average, which is why we are giving it a 4-star rather than a 3-star rating in terms of design.

Regarding its external design aesthetics, it doesn’t quite get the 5-star designation that we would save for a sultry piece of Apple technology, or a sexy Tesla Model S (not that we review cars, but dang, I want that car). But it’s quite good and features a simple design approach. Not much else to say here other than, “if it ain’t broke, don’t fix it.”


Perhaps where it stands out the most is the design of the operating system used to control the device. It is, bar none, the most user-friendly and powerful NAS operating system we have used. This is really where the value lies when purchasing a Synology NAS; no other brand delivers a comparable experience. It is incredibly powerful, yet easy to use.

I had a few qualms about Synology’s DSM in the version 3 and 4 days, which have since been addressed with v5. More recently, DSM v5.2 was released; instead of boring you with our take on it, I’ll just link to their press release here.

Here are a couple of quick pics of the beautiful and simple browser-based v5 DSM operating system, which you can use to control and tweak all of the NAS settings.



When it comes to design, the main reason we gave it 4/5 stars is that, despite the unit’s large size, upgrading the memory or swapping the fans requires taking apart half the unit. I really do wish they improve on this form factor in their next generation of desktop NAS units so one does not have to disassemble the chassis just to get at the memory; 4GB is simply not enough for a NAS with such powerful features and performance.

So regarding build design and upgradability, we give it 3/5 stars; from the standpoint of the operating system design and interface, we give the unit 5/5 stars, which averages to 4/5 overall in the Design category.


Features

As with any Synology product, it’s packed with features. The Synology DSM operating system has every bell and whistle you can possibly imagine. With the Synology package manager, you can load up different software packages to play with; you can even install your own CMS or mail server, along with various media servers.


There is too much in Synology’s DSM to fully cover in one review, so we’ll spend our time on the hardware features of the DS2015xs. And the DS2015xs is packed with connectivity options.

You can add up to two USB 3.0 external drives as additional or backup storage. It also comes with an external expansion port that connects to a DX1215 12-bay drive expansion unit. So, in total, one Synology DS2015xs can control up to 20 drives.

This really sets Synology’s units apart, as you can start with one 8-bay unit and expand as you go. If you initialize your drives with Synology’s Hybrid RAID technology, you can expand up to 20 drives, one drive at a time, without needing to destroy your existing data. It’s quite impressive, needless to say.

If you didn’t know already, it comes with 2x SFP+ 10GbE ports. No other NAS in this price range comes even close to offering built-in 10-gigabit connectivity. Because these are SFP+ ports, you can attach copper SFP+ cables or, as we did, two short-range 10GbE fiber transceivers to hook up to your switch.

I’m fully confident giving this unit 5/5 stars for features. With the expandability options along with dual 10GbE SFP+ ports, it has every bell and whistle anyone could ever need.



Quality

There’s not much to say here regarding the quality of the DS2015xs. It performs extremely well and is built like a tank. It has traveled between different studio locations, and there were absolutely no issues with the data on the drives.

Quality and reliability were among the primary reasons we originally looked into Synology several years ago, and on this front, they don’t disappoint in the slightest. When internal hard drives are in the beginning stages of failure, the unit reports it via a blinking indicator, and you can even set it up to send you emails if it notices that a drive may need to be replaced soon!

For these reasons, it easily earns 5 out of 5 stars for Quality and reliability.


Performance

Finally, the moment we’ve all been waiting for: how does the DS2015xs perform? In a word, wonderfully. While it’s not the fastest storage unit we have used, it certainly doesn’t underperform. We’re going to skip the single-gigabit numbers, as the DS2015xs can easily saturate one or two bonded GbE connections without any issues.

For those who may not be as geeky as me, just know that if you have a gigabit network connection, Synology systems can easily serve up enough data to use all of your network speed and far more!
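As a quick sketch of why a gigabit link is so easy to saturate, here is the approximate usable throughput of the two link speeds discussed; the ~5% protocol-overhead figure is an assumption for illustration, not a measured value:

```python
# Approximate usable throughput of an Ethernet link, assuming ~5%
# framing/TCP overhead (an illustrative assumption, not a measurement).

def link_ceiling_mb_s(gbit: float, overhead: float = 0.05) -> float:
    """Usable MB/s for a `gbit` Ethernet link after protocol overhead."""
    return gbit * 1000 / 8 * (1 - overhead)

print(f"1GbE : ~{link_ceiling_mb_s(1):.0f}MB/s")   # well below a modern HDD array
print(f"10GbE: ~{link_ceiling_mb_s(10):.0f}MB/s")  # headroom for the whole 8-bay unit
```

At roughly 119MB/s usable, a single gigabit link is slower than even a pair of spinning drives in a stripe, which is exactly why built-in 10GbE matters here.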

For starters, let’s discuss our testing methodology. We don’t rely on synthetic benchmark numbers alone; come on now, this is SLR Lounge. We base our testing on real-world situations.

In our studio, we regularly transfer large events, ranging from H.264 media files to the small image files that come from Lightroom catalogs, so speed matters a great deal to us. The quicker a transfer or backup finishes, the sooner our data manager or post-producers can get started on a new event. As I mentioned, we process nearly 1,000 client commissions a year, so speed and reliability are key for us.

We only work with two-drive redundancy, which means we can sustain up to two drive failures before data is lost. This gives us plenty of time to pop in a new drive if one begins to have issues.

So for testing, we will be using RAID-10 for the optimal balance between performance and redundancy. RAID-10 should also provide close to the best results, as there is minimal parity overhead compared to SHR-2 or RAID-6, though it is not as fast as RAID-0. (Yeah, that was super geeky.)
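To make the capacity-versus-redundancy trade-off concrete, here is a small sketch for an 8-bay unit of 5TB drives, matching the test bench below. The helper function is ours, purely for illustration, and the numbers are simple arithmetic:

```python
# Hedged sketch: usable capacity and guaranteed fault tolerance for an
# 8-bay array of 5TB drives under the RAID levels discussed.

def raid_summary(level: str, n_drives: int, drive_tb: float):
    """Return (usable TB, guaranteed drive failures survivable)."""
    if level == "RAID-0":    # pure striping: fastest, zero redundancy
        return n_drives * drive_tb, 0
    if level == "RAID-6":    # double parity (SHR-2 behaves similarly)
        return (n_drives - 2) * drive_tb, 2
    if level == "RAID-10":   # striped mirrors: half the raw capacity;
        # survives 1 failure guaranteed (more if failures hit different pairs)
        return (n_drives / 2) * drive_tb, 1
    raise ValueError(level)

for level in ("RAID-0", "RAID-6", "RAID-10"):
    usable, failures = raid_summary(level, 8, 5.0)
    print(f"{level:8s} usable: {usable:.0f}TB, survives {failures} failure(s)")
```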


For this review, we’ve set up a custom test bench consisting of the following:

  • Synology DS2015xs w/ 8x Seagate ST5000VN0001 5TB drives in SHR-2
  • Juniper EX3300 switch for SFP+ termination on the Synology DS2015xs
  • Netgear XS712T uplinked to the EX3300 for 10Gbase-T termination
  • Proxmox on dual E5-2670s acting as hosts for the 10GbE Windows virtual machines
  • 4x Crucial M4 512GB SSDs in RAID-0 w/ RAM-based file cache to prevent storage bottlenecks

Crystal Disk Mark

First, let’s start with Crystal Disk Mark numbers, which should be representative of best case performance under the specific testing workload.

Synology DS2015xs CDM

SLR Lounge Wedding Photo/Cinema Benchmark

To test the real-world performance of the Synology NAS, we gathered a combination of image and video files to simulate what most photographers and cinematographers would use the NAS for. We benchmarked several different scenarios, such as a single client accessing the NAS and multiple clients at once.

Some NAS units also behave slightly differently when writing versus reading, so we made sure to test and log both directions.

Single Client PC -> NAS

  • Average transfer rate: 359.54MB/s
  • Minimum transfer rate: 150.84MB/s
  • Maximum transfer rate: 613.64MB/s

Synology Bench 1

Single Client NAS -> PC

  • Average transfer rate: 370.55MB/s
  • Minimum transfer rate: 219.25MB/s
  • Maximum transfer rate: 505.30MB/s

Synology Bench 3

Dual Client PC -> NAS

  • Average transfer rate: 366.41MB/s
  • Minimum transfer rate: 104.85MB/s
  • Maximum transfer rate: 645.40MB/s

Synology Bench 2

Dual Client NAS -> PC

  • Average transfer rate: 412.36MB/s
  • Minimum transfer rate: 105.09MB/s
  • Maximum transfer rate: 533.90MB/s

Synology Bench 4

Remember, this is a NAS device, so every computer attached to your network can share the drive and benefit from its performance. Let me put all of these numbers into perspective for those of you wondering how other hard drives stack up.

The first number we provide is the average transfer speed based on the typical files photographers move around (images and videos). The second is a Crystal Disk Mark rating that measures “optimal” transfer speeds with large sequential files.

Synology DS2015xs w/ 8x Seagate ST5000VN0001 5TB HDDs
360MB/s real world average read/write times (Random small non-sequential files).
438MB/s read and 631MB/s write times (Crystal Mark/Large Sequential Files).

Western Digital Passport Drives (USB3)
60MB/s real world average read/write times (Random small non-sequential files).
110MB/s read and write times (Crystal Mark/Large Sequential Files).

Western Digital MyBook (4TB USB3)
90MB/s real world average read/write times (Random small non-sequential files).
134MB/s read and write times (Crystal Mark/Large Sequential Files).

The last Drobo we used was over five years ago, so we are going to defer to numbers from Storage Review on the latest Drobo 5D/5N. Their numbers match our own experience with the devices’ lack of performance. The software-based BeyondRAID system simply has too much overhead to compete with something like a Synology.

Drobo 5D/5N
21MB/s read and 143MB/s write (Random large-block transfers)
203MB/s read and 155MB/s write (Crystal Mark/Large 5GB Sequential Files)

Here’s another competing product, also from Storage Review:

LaCie 5big RAID (RAID 10)
154MB/s read and 170MB/s write (Blackmagic Disk Speed/Large Sequential Files)

The Synology DS2015xs and its sister products outpace every other competing product we have tested, as well as those we have seen tested. It is simply one of the best-in-class NAS units and a perfect fit for media professionals, outpacing most products by anywhere from 200 to 500%. That’s huge!

To put it into perspective, a 100GB transfer of images from a photo shoot that would normally take 21 minutes on a regular external hard drive would take just over 4 minutes with the Synology NAS and a 10GbE network. Keep in mind, you will need to invest in a 10GbE network to get these types of speeds, but it is well worth it if you are a growing studio.
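The arithmetic behind those figures is worth spelling out. This quick sketch assumes a typical ~80MB/s external drive and the ~360MB/s real-world average measured above (decimal units, ignoring protocol overhead):

```python
# Back-of-the-envelope check on the transfer times quoted above:
# 100GB of photos at a typical external-drive speed vs. the DS2015xs over 10GbE.

def transfer_minutes(gigabytes: float, mb_per_s: float) -> float:
    """Minutes to move `gigabytes` of data at a sustained `mb_per_s` rate."""
    return gigabytes * 1000 / mb_per_s / 60

print(f"External HDD (~80MB/s): {transfer_minutes(100, 80):.1f} min")   # ~20.8 min
print(f"DS2015xs    (~360MB/s): {transfer_minutes(100, 360):.1f} min")  # ~4.6 min
```

Both results line up with the “21 minutes” and “just over 4 minutes” figures quoted above.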

Now, I feel a little guilty about this part. Even though the Synology is blazing fast, we are giving it 4 out of 5 stars for performance, because during benchmarking it appeared the Synology is capable of more than the numbers we saw.

The bottleneck is CPU speed: Windows CIFS/SMB transfers are single-threaded, so they are limited by the onboard ARM processor. If Synology shipped a unit with a faster onboard CPU, performance could be even better. That said, 360MB/s+ read/write speeds are hardly slow, and they far exceed other options on the market.

Value

When you consider the outstanding performance, quality, design, and features, the result is an outstanding value. The Synology DS2015xs can do everything you’d expect a NAS server to do, and more. For a small and growing studio that needs an entry-level professional server, this is the one for you.



If you’re looking into NAS systems, and you want your investment to be worthwhile, then you should definitely consider the Synology DS2015xs. This is a NAS server that can grow with a small studio, and with all its features it’s an easily justifiable purchase. If you need something that is a little easier on the budget, keep in mind that you can get similar Synology performance in smaller NAS units. They have 2-bay units starting at $150 (DS214se) on B&H.

If you’d like to see a little more of how easy the Synology DS2015xs is to use, I’ve included a personal “Getting Started” tutorial at the bottom of this article with a brief walkthrough of setting up the device through its operating system.

I hope you enjoyed this review. If you’re interested in purchasing a Synology DS2015xs, then click on any of the links above, or right here. Have any questions about the Synology DS2015xs? Or maybe you want us to review something else? Let us know in the comments!

Getting Started With the Synology DS2015xs

Synology has made it quite simple to get the DS2015xs going. You simply pop in the hard drives, connect your network cable, then give it some power. Once it’s booted, you can use the Synology Assistant to locate the IP address of the DS2015xs.

By default, a Synology does not come with DSM installed, and there are two main ways of installing it: via the web manager or via the Synology Assistant. We opted for the web-based installation.


After the Synology has been found, you can then proceed to your browser (via the Synology IP address) to run through the DSM installation process.




Once it is fully installed, you will land at the login page. The default username is admin, with no password. After you’ve logged in, there are a few additional setup screens; make sure you don’t skip the step where you set your Synology name, username, and password.



Afterward, you will be taken on a guided tour through the web interface.






Now you may be wondering: how does one connect to the Synology via Windows File Explorer or Apple Finder? If you opted out of installing DSM with its default SHR provisioning scheme, you will need to head into the Storage Manager to create a volume before setting up a Shared Folder. A volume is the local file system on which your Shared Folders reside; Shared Folders are what you transfer files to over the network, and you can create multiple Shared Folders on a single volume. We’ll walk you through that now.

Volume & Shared Folder Setup

Synology offers quite a few different disk redundancy schemes: traditional RAID 0/1/5/6/10 or Synology Hybrid RAID. Synology Hybrid RAID lets you mix and match drive capacities, or expand the volume as you purchase additional drives. It also works with expansion bay units, so you do not need to create a new volume, or destroy your existing one, to add space. Internally, we will be using RAID-10, as this will be the sole primary data store for some of our media projects at the studio. For this review, however, we will opt for Synology Hybrid RAID with two-drive protection, as that is what we feel most photographers will use, and it should represent the lowest common denominator in performance.


Here you can select the number of drives you want to use for this specific volume. You can create multiple volumes on one unit; however, only one volume can reside on a disk at a time. We’ll be using all of the disks for maximum performance and capacity, with two disks’ worth of data protection.



Here the Synology asks if you wish to run a disk check while formatting the drives. Because we had already created a volume on these drives with sector scanning enabled, we opted to skip it this time around. It’s highly recommended on a fresh set of drives, as any potential bad sectors from the factory will be mapped out in the process; it does, however, add quite a bit of time before you can start using the Synology.



After that is all done, you will be able to create shared folders on your brand-new RAID-protected volume. This is a very simple process: 1) name the shared folder, and 2) assign permissions to the users you want to allow access.

All done!



This site contains affiliate links to products. We may receive a commission for purchases made through these links, however, this does not impact accuracy or integrity of our content.

Founding Partner of Lin and Jirsa Photography and SLR Lounge.

Follow my updates on Facebook and my latest work on Instagram both under username @pyejirsa.

Q&A Discussions


  1. Anders Madsen

    Ah, OK.

    If you import to both disk systems at once you are well covered as far as your original images goes. Worst case scenario your LR edits will be lost if the RAID0 drive goes kaboom before CCC has been run (unless your catalog file is on a different, separately backed up drive).

    BitTorrent Sync is only usable for keeping folders in sync between two computers, not two drives on the same computer, so it probably is of no use to you.

    The only flaw I can find in your current setup is the human factor: If you delete a folder by accident, the deletion will be propagated throughout the disks. Do you keep an archive on a remote location for situations like that?

    • Ken Marcou

      Yeah, the catalog lives on the MacBook Pro so I can always work while disconnected using smart previews, and the catalog is always backed up to the cloud in Dropbox as well. Then I run periodic LR backups, but they get saved back to the RAID 0 box and redundantly backed up over to that RAID 1 box as well. So very little is ever together on one drive.

      BitTorrent Sync could be useful if I finally get a desktop computer, an iMac. Been thinking of doing that instead of just finally getting a large screen to edit on. Although it sounds kind of like what Dropbox already does. Will explore it some more though!

      It’s funny you bring up the archive. I’m using a RAID 0 4TB as my main working drive and a RAID 1 8TB to clone the 4TB, but a few months ago when I started getting close to filling up the drives, CCC was telling me the destination drive (RAID 1 8TB) was already full, even though I still had room on my main working RAID 0 4TB drive. I was confused, because the two boxes should be exact copies of each other. It was because of the archive Carbon Copy Cloner makes for pretty much exactly the human error you’re referring to! You can adjust how much it will do this. I had to basically delete the archive and have it stop archiving so I could fill up the 4TB drive and have it clone to the RAID 1 8TB. So if I stay with this route, I think I will want to get a larger backup drive so I can increase the archive size a bit instead of getting rid of the archive function. I’m thinking that I may skip the RAID 1 backup and just put backups on a standard drive; it’d be a lot less money. Unless the RAID 10 thing you mention looks good, which I have to research. Never-ending research! :/

    • Anders Madsen

      Sounds like you are well covered as far as data redundancy and backup, to be honest.

      BitTorrent Sync is a lot like Dropbox in many ways, except that there is no copy in the cloud – it synchronises data between your two computers in close to real time and that’s it.

      The upside is that a) it’s free, b) it’s cross-platform so you can sync between Mac, Windows and Linux with no issues (even some NAS systems support it directly) and c) personal, sensitive data never leave your own systems and cannot be snatched from the cloud.

      RAID10 is also known as RAID1+0 or RAID0+1, which means that you take a stripe with two (or more) disks and add a second stripe, making it a RAID1 with RAID0 stripes instead of just disks (in reality it’s a bit more complex under the hood, but you get the idea). You will have the speed of your usual striped RAID0 and the redundancy of a RAID1. And a wallet that is significantly lighter…

  2. Ken Marcou

    I have been doing RAID 0 so my working drive will be fast enough; only once I started using a striped HD did LR editing finally become a pleasant experience. But I basically don’t “accept” that files are on my hard drive until that RAID 0 box is cloned to the RAID 1 drive. The only things that live on these are images, so with every import the files go to both RAID units and nothing changes until the next import. So on my desk, images live in three places: the RAID 0, and the RAID 1, which is two copies. I do a lot of working now not connected to anything, though, just using the smart previews. (The LR catalog lives on my laptop and is backed up to Dropbox.)

    I’ll have to read more about BitTorrent Sync to see how it’s better/different than CCC.

  3. Anders Madsen

    RAID5 will suck almost regardless of the system: You write to one disk and have to wait for the parity data to be calculated and written on top of that.

    If anything, go with RAID10 instead of RAID0 since RAID0 is basically a disaster waiting to happen – lose one disk and the entire filesystem is gone. Alternatively use BitTorrent Sync instead of Carbon Copy if you can (disks need to be attached to different computers) so every file is synced on update.

  4. Ken Marcou

    These boxes connect to the router via gigabit cable, correct? I would access files on the box on computers in my office network connected to the router via wifi at the speeds you’re referring to?

    • Anders Madsen

      Nope – your wifi would be the limiting factor here.

      At gigabit (wired) network speeds, access is notably slower than e.g. SATA (directly attached to the motherboard of your computer), but nevertheless acceptable. Some computers may have two network interface cards, which will bring you close to native SATA speed for a single disk, provided you are the only one accessing the NAS at the time (unless the NAS has 4 NICs).

      When using a wireless connection, all bets are off: distance to, and the environment around, the access point will greatly influence transfer speeds. Even with the fastest and newest wireless, you cannot expect speeds much above 100 Mbit/s in real-world use (even 802.11ac, which should be good for gigabit in theory); that is, about 1/10 of a good wired gigabit connection.

    • Ken Marcou

      Ah, thanks, Anders. That’s what I thought, though I figured the limiting factors you mention might only apply when going wireless to actually get out onto the internet. In this case, with the NAS, I would only be using the router for local networking, so I thought maybe it would work. Damn! Not quite there yet, then! I guess I will stick with my Thunderbolt drives for LR editing. I have been using a RAID 0 main hard drive backed up via Carbon Copy Cloner to a RAID 1 drive, so the backup is a redundant backup. But I kind of want to move to just one RAID 5 box, though the ones I’ve looked at don’t have great reviews.

  5. Joseph Wu

    I’ll have to chime in here. (Also need to push for the new comment system we’ve been discussing, to allow tagging and replying on nested comments.)

    The whole Synology lineup uses md, which is essentially Linux software RAID. You can even take the drives out of a Synology and import them on Linux, as long as you are using the non-Synology-Hybrid-RAID levels.

    Linux RAID is plenty fast, there are always scenarios where you would want hardware RAID, but I think that’s out of the scope for most photographers.

    The benefit of a hardware RAID card is caching. Onboard cache like that on the LSI 9266 or 9271 lines can give you 1-2GB of very fast DDR3 cache that allows burst speeds of over 2GB/s.

    • Dave Haynie

      As mentioned, the latest Synology boxes run Intel quad-core 64-bit Atom processors, which have their own 64-bit-wide DDR3/1600 memory on dual channels (well, that’s the capability of the chips; I’m not certain what Synology does with them). You’re talking slightly over 25GB/s of throughput with that RAM if they’re using both channels. Again, it’s up to their implementation, but from your PC’s point of view, this is hardware RAID: it’s basically a whole second small PC devoted to that RAID. Maybe they’re not stuffing enough RAM in there, but even that would be surprising; you can only make systems so small these days, given the availability of chips. The minimum I can deliver on a single 64-bit DDR3 bus these days is 512MB.

      Their main bottleneck should be in the interface between your PC and the Synology box, or something they’re doing in software that could be improved (caching, read-ahead, etc).

  6. Pye

    Sorry, @Dave, @Anders and @Robert. I should have written that differently. You are all right: every NAS/RAID system uses software to separate/combine data during writes and reads, and both the Drobo and the Synology have software and hardware components. I simply meant that the hardware provided in a Drobo isn’t sufficient to deal with the massive overhead its software creates. Simply put, it’s a slow system. I should have stated that the Synology has better hardware and far lower software/OS overhead. The way it was originally written made it sound like one was software-only versus hardware-only RAID, which was incorrect. Thanks for the correction; the article has been updated.

    • Dave Haynie

      Yeah, agreed. The Drobos currently use someone’s ARM processors; they used Marvell’s in the early units, like my old Drobo 2. Those were dual-core ARMs, with the processing split between the RAID running on one core and Linux running on the other, so they were pretty easy to bog down. The more recent ones run quad-core ARMs, but I have yet to hear whose; I’m guessing probably ARM Cortex-A9 cores, since that’s what most folks are using in embedded systems these days. Those max out at around 1GHz (the ones I’m designing with run at 800MHz).

      Synology is using multicore Intel Atoms in their systems, like the Atom C2538 which gets you quad core 64-bit Atom at 2.4GHz, faster and wider DDR3 memory than most of the ARM chips, etc. So yeah, you should feel the difference there.

  7. Will Pursell

    Wow, that was a lot to take in. I just bought a QNAP 451 and I’m pretty happy with it, but I’m sure the faster write speeds of this unit would be better. I have it set up with two 4TB hard drives in RAID 1, and I have two more lying around that I’m going to put in the additional bays. I see you guys are using RAID 10. Is that what would be recommended for this 4-bay type of setup?

    • Joseph Wu

      Yes, for a 4-bay unit I would do RAID-10. That will give you a stripe of two drives, plus mirrors of the stripe.

      Essentially, the formula for usable space is drive size × N (number of drives) / 2.
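A quick sketch of that formula (the function name is ours, just for illustration):

```python
# RAID-10 usable space = drive size x number of drives / 2,
# since every stripe is mirrored.

def raid10_usable_tb(drive_tb: float, n_drives: int) -> float:
    """Usable capacity in TB for a RAID-10 array of identical drives."""
    assert n_drives % 2 == 0, "RAID-10 needs an even number of drives"
    return drive_tb * n_drives / 2

print(raid10_usable_tb(4.0, 4))  # four 4TB drives -> 8.0TB usable
```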

    • Dave Haynie

      I have RAID10 on my internal, Intel-based, runs-on-the-PC RAID, because the overhead for that is practically unmeasurable versus a stand-alone drive. Writes technically take twice as long, as they have to go to two drives rather than one, but most of that kind of I/O happens in parallel with CPU use anyway, so it’s no big deal. And of course, reads are faster (my RAID10 reads at about half the speed of my SSD; not bad for HDDs).

      For a dedicated RAID, I favor RAID5, as it offers essentially the same level of protection but does not require full redundancy to get there. That’s not going to be fast if you’re making your PC do all the work, though, so leave it to a dedicated controller or external box.

      I’m using the proprietary Drobo at the moment, which basically scales based on the drives you have installed. If you build a 2-drive system, it’s essentially a RAID1, when you’re at four or more drives, it behaves more like RAID5 (or RAID6 in the “enterprise” versions… that allows for two drives to fail at the same time with no data loss).

  8. Leslie Troyer

    Not mentioned is cyber hardening. I think a lot of data is at more risk from hacking than from drive failure (assuming you use RAID and check for dead drives). With a small install base, are they really secure, or do they leave ports open for features most folks would never use (e.g., an FTP server)?

    • Joseph Wu

      By default, everything is locked down. You can choose to enable the cloud features.

  9. Anders Madsen

    I’ve been using Synology RS-series devices for quite some years (I was a systems administrator and a CTO at a webhosting company before being a photographer) and I can second the thoughts brought forward by both Pye and Robert Garfinkle.

    However, there are a couple of things:
    Although Pye states that software RAID is too slow for their needs, I’m 99% certain that the Synology devices all use software RAID, even in their enterprise solutions. This is actually a benefit with regard to one of the requirements stated by Robert, since it ensures that you can get to your data even if the entire device is fried and only the physical hard drives remain functional. To the best of my knowledge, the disks can be read by a Linux computer as long as it supports mdadm and ext4, which just about anything recent will do.

    And yeah, it’s a bit more complex than simply connecting the drives and firing up the computer, but at least it is possible. With a Drobo and its proprietary file system, you’re toast.

    BTW, contrary to what Robert states, RAID 1 (or really RAID 10, or RAID 1+0, when you have more than two physical drives) actually IS the fastest way to store the data. It’s true that RAID 0 is technically the fastest RAID configuration, but it has absolutely NO place in a setup like this, since it offers no data protection whatsoever – if ONE drive dies, ALL your data is gone.

    Robert states that he does not trust any physical drive above 2TB. I have never encountered any significant problems with drives above 2TB, even under very heavy load situations over several years, so while some drives may have had issues, they are definitely not common today.

    Robert advises a replacement of the drives ahead of time, well before the expected EOL. Although I’m definitely paranoid when it comes to data integrity, this one is a bit on the excessive side for me. Have spare drives stored close to the physical device and make sure you replace any failed drives promptly and you will be fine. When buying drives make sure that you buy enterprise grade drives (Robert mentions this too, and I fully agree).

    One thing that isn’t mentioned is the ability to back up your data directly to Amazon S3, or to sync directly to another Synology device at another location. This is actually HUGE, since backup is crucial – and more so when it comes to these very large amounts of data.

    One thing that I really miss is the ability to use BitTorrent Sync with Synology (yes, it can be done, but it is waaaay more complex than with e.g. QNAP). For those who do not know, BitTorrent Sync can be used to keep a set of folders in sync between two different devices, and it can be done over the internet. I use it to keep my storage at the studio in sync with a Windows PC (equipped with plenty of drives) at my home, and compared to the rather pricey solution of having two separate Synology devices replicating data between them, the BitTorrent Sync solution is both cheap and efficient.

    Lastly, for those running into similar problems with Windows network drives being limited by the processing power of the Synology, the RS models (RackStation – these are not desktop devices) have way more power per core (important for single-threaded services) and offer a wide array of 10GbE network cards. Yes, they are twice as expensive as the DS2015xs, but it’s the drives that are going to bankrupt you anyway… ;)

    For those who don’t like rack-mounted devices, QNAP has a lot of different desktop models that can use third-party 10GbE network cards. QNAP has a slight edge when it comes to backups, since it supports a number of different cloud storage solutions, but I honestly don’t know if their performance can match that of the DS2015xs, so please do your research before deciding. The processing power of, e.g., the QNAP TVS-EC880 is similar to that of the RS series from Synology, while pricing is somewhere between that of the DS and RS series.

    • Martin Pernecký

      There is already an approved BitTorrent Sync package in the repository. You can install it directly from Package Manager.

    • Dave Haynie

      Every RAID is a software RAID. Period. The only question is where the software runs. If it’s not running on your host PC’s OS, you say “hardware RAID” – not because someone has magically built a dedicated circuit that can manage a RAID file system, but because you’ve added another processor to deal with the task of managing the RAID. And maybe some buffer memory that doesn’t involve the host PC at all.

      This is of course why “hardware RAID” isn’t necessarily faster than “software RAID.” Case in point: my system. I have a built-in-the-box RAID 10 running Intel’s software RAID (Intel apparently puts a few very low-level bells and whistles in their SATA controller to facilitate the things that RAID does, but it’s still software, running on the PC, doing the work at the device-driver level). And I’ve had two generations of Drobo boxes – they say “Beyond RAID,” but it’s basically just a flexible RAID.

      OK, the first-generation box was pretty slow. You could blame the I/O connections, but going from USB 2.0 to FireWire 800, while it did improve performance, didn’t improve it by much. It was limited by the host processor. The current one, on USB 3.0, is much faster, but still limited by the host processor rather than by the speed of USB 3.0.

    • Dave Haynie

      If you can get at the data from a Synology box on a standard Linux system, that’s not the result of “software” vs. “hardware” RAID; that’s the result of using open-source vs. proprietary RAID. And yeah, if my Drobo 3 dies, I’ll have to dust off the old Drobo 2 to get at the data… I can’t read it on a Linux PC.

      That also points out the fact that RAID != archival. RAID is a way to get at lots of data in a pretty reliable way, with a very low chance of downtime. But it is still possible to lose data… everything on every RAID should be backed up or archived somewhere else. All my stuff is on backup HDDs, and all the critical stuff (photos in particular) is on HTL Blu-ray discs in addition to the HDDs and the RAIDs (all current photos and video live on both the in-system RAID and the Drobo; eventually, older photos get tossed off the smaller RAID).

    • Anders Madsen

      Dave, I know you know a lot about electronics, but you are in over your head to some extent here.

      There is definitely a very real difference between software RAID and hardware RAID (and the FakeRAID you mention from Intel, which is nothing like a proper hardware RAID).

      The most important difference is the way the RAID containers are created. With software RAID you basically need an OS that can run the same (or at least compatible) software that was used when the containers were created and you are good to go – your OS will recognize the containers and the volumes in them.

      When using FakeRAID and proper hardware RAID, you need drivers for the hardware for the OS you run, and – if the hardware RAID controller broke – you need a compatible hardware RAID controller as well.

      The software used to create the containers resides in the firmware on the RAID controller, and sometimes you can’t even take disks from one generation of hardware controllers and plug them into a newer generation from the very same vendor (Dell, I’m looking at you and some of your PERC controllers) and have the system recognize the containers.

      Hence, at a minimum you need a hardware controller from the same vendor, and sometimes you need a specific model or a specific generation of controller from that vendor – otherwise your disks are just paperweights.

      That is why having software RAID inside Synology boxes (or QNAP for that matter) is very much a benefit compared to having a hardware controller if you need to get a hold of your data from a broken unit where only the disks are usable. No worries about finding another unit with a compatible RAID controller – you just plug your disks in and you are good to go. If you don’t want (or need) a new unit, you can attach the disks to a standard Linux box with a sufficient number of SATA controllers and work your way from there.

      As for speed, you are right about some implementations of hardware RAID (FakeRAID) being inferior to good software RAID, but proper hardware RAID will blow both software RAID and FakeRAID out of the water any day of the week, especially when it comes to write speed. The main thing here is on-board RAM on the controller, and the fact that it has a battery backup (note: I said “*proper* hardware RAID”) that will protect the contents of the RAM in case of a power loss.

      This means that you can use the RAM as a huge write cache, which in turn means that the OS can push data into the controller at speeds way beyond those of software RAID, as long as you are working with data sets smaller than the cache size – with software RAID you HAVE to wait for the data to be written to the physical disks before reporting back to the OS that the data is stored.

      In case the OS needs the data before it is flushed from the cache, it can be read at equally high speeds, but that goes for software RAID as well, since that maintains a cache in the computer’s ordinary RAM.

      So no, there is no magic involved, but there are some very real differences with respect to both data recovery and transfer speeds.
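
      The write-cache effect described above can be illustrated with a toy model (all names and numbers here are hypothetical, purely for illustration):

```python
def ack_time_seconds(burst_gb, cache_gb, bus_gb_per_s, disk_gb_per_s):
    """Toy model: time until the OS sees a burst write acknowledged.
    Data that fits in the controller's battery-backed cache is
    acknowledged at bus speed; any spillover waits on the disks."""
    cached = min(burst_gb, cache_gb)
    spilled = burst_gb - cached
    return cached / bus_gb_per_s + spilled / disk_gb_per_s

# A 2 GB burst, 4 GB of controller cache, a 1 GB/s bus, and disks
# that sustain 0.2 GB/s: the burst fits in cache, so the OS sees it
# acknowledged in ~2 s instead of the ~10 s the disks alone would take.
print(ack_time_seconds(2, 4, 1.0, 0.2))  # → 2.0
print(ack_time_seconds(2, 0, 1.0, 0.2))  # no cache → 10.0
```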

      And you are definitely right about RAID not being “Archive”, which is also why I’m so big on syncing with external storage one way or another (Martin, I was not aware of the approved package, but I have not checked for a while – thanks for the heads-up).

    • Dave Haynie

      Anders… I think you missed my point. What you’re referring to as “hardware RAID” is a dedicated controller with its own OS, usually someone’s RTOS, and most likely a proprietary RAID structure. There’s probably a bunch of buffer memory on it, that CPU or SoC (the last one I took apart actually used a SPARC processor), and one or more HDD controllers. But the RAID is based on software – it’s just software that’s not running on your host CPU, as I said. The only “hardware” you have is a secondary computer system, which is exactly the same functional thing you get if you build your own “software RAID” on a Linux box – as Synology apparently does – and dedicate that whole box to being a “hardware RAID.”

      The only reason that seems like a “software RAID” to anyone is that you can get the same RAID driver code and run it on any Linux box. But there’s no fundamental difference between that setup and your “dedicated hardware RAID card,” other than one being a whole second PC of some sort (Synology is doing exactly that, in fact, since at least some of their boxes are based on the Intel Atom series). Drobo is using ARM processors in theirs, as far as I know, like just about everyone else… designing embedded ARM systems is my day job these days. So yeah, I do know a bit about this stuff.

      Intel’s FakeRAID is just software RAID, but implemented in the BIOS and at the driver level, rather than like Windows striping/mirroring, which is implemented at the file-system level. It’s really not much different from MD/RAID under Linux, other than being Intel-proprietary. And there are other companies with their own flavors of this; I’m just picking on Intel because that’s the one in my PC’s BIOS.

      It’s a trade-off. The BIOS-based pure software RAID can work under any OS and can be used as a boot drive (though I don’t use it that way), but it’s locked to that family of hardware unless they use a standard (no reason they couldn’t, but they usually don’t… or in Intel’s case, they’ll probably claim they are the standard). An OS-managed RAID will work on any version of that OS. A “hardware” RAID – again, meaning that some other CPU and memory are involved in running the RAID software, whether in an external box or on an internal card – may or may not be dependent on that specific hardware, depending on the RAID implementation.

      Case in point: Synology vs. Drobo. Both sell me new hardware, and both run the RAID protocol on that hardware, so in neither case is my PC doing any RAID work… all I see is a DAS or NAS. So from the perspective of my PC, these are both hardware RAID systems. But in either box, there’s a CPU subsystem, an OS (Linux, in both cases), and a RAID software stack… so from the perspective of the box, it’s a software RAID in both cases. The fact that Synology’s is open source and will run on any Linux box, while Drobo’s is closed source, doesn’t somehow make the Drobo any more or less of a “hardware” RAID. That’s the point I was trying to make here… all “hardware RAID” has ever meant is that the RAID stack is running on some other hardware, not on my PC.

  10. robert garfinkle

    I suppose workflow/speed is important. I am a technical guy, yet I blew past all the figures and facts. I trust that if Pye put up all this, he trusts it and is impressed by it. Cool.

    And this is just my humble take on things. I think the number one factor – to me, the most important – is content retention, followed by availability, and lastly, speed.

    Yet I have to mention a concern of mine: in the end, if the device breaks, can I get to my content? If the device/system becomes unreachable due to problems – i.e., if the power supply goes out, or a RAID controller fails – is my content ultimately safe?

    I don’t put faith (a.k.a. money) into cheap drives; however, I have seen even the best of drives fail. So where does that leave me?

    1. I was told, and I trust this, that drives (the single physical device) in this day and age which are larger than 2TB have problems – I am not speaking about RAID configurations that present themselves as larger than 2TB, but the physical device itself. So, having said that, if I can group a series of drives of 2TB or less in a reliable RAID configuration that offers redundancy and speed, cool.

    I have a server, still in use, that uses three 500GB drives. It still seems healthy after 4 years of use, and the drives are great – but see #3: I will be replacing all the drives this year and retiring them.

    2. I have an itch about NAS devices and custom OSes – the underlying OS may have neat stuff, but where accessibility is concerned, is the data in a common format? I will pass on purchasing that type of system if that is unknown. I must, in the event of failure, have the ability to pull a drive (or drives) and know that I can get to my data using ANY other computer. It’s a must…

    I prefer to set up my own server, no matter how large, with hardware RAID, using a common operating system like Windows (or Linux), and to ensure that the drives are formatted in a common format like NTFS (or similar), so that if that machine goes down for ANY reason, I can pull the drives and get to the data…

    Note: Not all RAID configurations are easily bustable, where a drive can be separated from the batch and read. My preference is to use a mirror (RAID 1) – not the fastest by any stretch, but experience has shown that I can pull a drive, put it in an external enclosure, and read the data. Done.

    3. Be ahead of the curve – on the anvil of preparedness, do not wait until something breaks. Plan to replace computers/drives regardless of what the manufacturer states. If a drive/system has an expected/advertised life expectancy of 5 years, replace everything in 3.5 to 4 years.

    If you’re in business, it is essential to buy additional “like” drives and have them around for replacement on the spot. Don’t get caught in a situation where you have downtime and are searching for drives. If your system has 3 drives, buy 1 extra. If you have a 10-drive RAID setup, buy 2-3 replacement drives.

    Buy the extended warranty, and ask if the manufacturer offers on-site repair.

    It is well advised to purchase enterprise-level drives – they not only carry a longer warranty, but it’s also a sign that the manufacturer will still be making that drive a few years down the road. Seagate and Western Digital are notorious, in a good way, for having the same drives available should you need them. Western Digital’s Black drives are good, as are their Raptor drives – those are mainstay devices for that company. Seagate has its Constellation ES drives…

    4. Power/UPS – the most important part of your system. Better than a warranty, and considered the best insurance: buy the most robust UPS/power conditioning you can. This not only ensures uptime, but in many cases provides a great layer of protection between your assets and the power company’s poor power grid.

    Protect your server (NAS), and ensure your routers, modems, and network switches are ALL on this power backbone. That way, if power goes down but the batteries are up and in use, you have time to safely finish work and prepare for shutdown while still being connected to your local area network.

    5. Where possible, use wired devices vs. wireless in the house – it provides the fastest means of workflow and getting stuff done…
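
    Robert's spare-drive rule of thumb in point 3 above (1 extra for a 3-drive system, 2-3 extras for 10 drives) can be captured as a small formula – the "one cold spare per four drives" ratio is my own interpolation, labeled hypothetical:

```python
import math

def spare_drives(num_drives):
    """Hypothetical formalization of the rule of thumb above:
    keep roughly one cold spare per four drives in the array,
    and always at least one."""
    return max(1, math.ceil(num_drives / 4))

print(spare_drives(3))   # → 1
print(spare_drives(10))  # → 3
```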

    Finally, in no way, shape, or form am I knocking what Pye presents in this article. It appears solid and robust. I only offer my experience, having been in the IT industry for so many years, and look to help others.

    Have a good day.

    • robert garfinkle

      Some afterthoughts – not mentioned above:

      Regarding “commonality” (i.e., ensuring a lowest common denominator for availability) – one of the ideas behind having a common OS (i.e., Windows Server or Linux) is that simple file sharing enabled on the host is your best friend. Windows specifically has CASF (Continuously Available File Sharing) and similar technologies, which allow you to be connected 24/7, anywhere, anytime.

      So, aside from using a common OS, a common format, and simple RAID, one may wish, in short, to have your own cloud available for all your local and remote devices – so that when you are on the go, you can be directly connected to your files, either to show clients or to easily work as if you were in the home/office.

      While I won’t dive into the complexities of these setups, I will make a statement about “the trouble with third-party cloud”: a setup allowing you to be directly connected to your own systems has its benefits. Cloud storage is sold on the convenience of availability, yet it also comes tethered with legalities and with third-party fingers holding your assets in their hands without liability.

      For me, personally, if I can avoid having to worry about who retains rights to my intellectual property, who has rights to go through my personal content, or a third party claiming they are not responsible if “something” happens – let alone if their servers go down – I’d be looking at personally owned and hosted systems at the home/office. Best bet.


    • Pye

      Very good points, Robert, especially about using UPSs to support your critical systems. We follow all of your key points in the studio as well: using UPSs, wired 10GbE connections, etc.

      Regarding the drive sizes, we use enterprise drives and really haven’t noticed anything worrisome about drives larger than 2TB. Between all of our systems, I think we have only swapped out 3 drives so far (out of a lot of drives). For systems that require the ultimate stability, you may have a point. But the productivity gains for us are too great to pass up. Our Synology notifies us of drive problems/impending failure well before the drive actually fails, and we set up each system to be able to sustain 2 drive failures prior to data loss.

      For us, that’s been more than enough protection for our purposes. We benefit far more from having substantially more storage space per NAS than from a minor improvement in reliability on a system that’s already incredibly reliable and redundant. But I know IT is a very finicky area; everyone is going to have their preferences.

      We have been using Synology units in the studio for years. Only recently have we moved to a full custom system. But, even then we still use our Synology devices within different departments and they are still fantastic.

    • robert garfinkle

      This sounds great, Pye. I will look into it more. Great info, and thanks.
