New Workshop! Lighting 3 | Advanced Off Camera Flash

Gear & Apps

Lightroom 4 and 5 Hardware Performance Test: Single vs Dual CPUs, Hard Drive Configurations, Previews and More!

By Pye Jirsa on August 5th 2013

Article Overview

In this article, we are going to test Lightroom 4 and Lightroom 5 performance across several hardware configurations. Our goal: to determine the optimal hardware configuration for Lightroom workstations while also measuring the performance difference between Lightroom 4 and Lightroom 5.


Thanks to the awesome guys and gals at Newegg, we recently had the opportunity to co-build a high-end custom dual-Intel Xeon workstation for image and video post-production work. The new machine also gives us a chance to test it against our other high-performance Intel i7 machine to see whether Adobe Lightroom can take advantage of a dual-CPU computer as well as to test the difference in performance within Adobe Premiere.

Watch the Video of the Lightroom Performance Testing

Lightroom Hardware Test Early Conclusion

For those of you lazy folks who aren’t interested in reading all the details of our tests, here is the “early conclusion.”

Hard Drive & Catalog Setups
From our tests, we didn’t note a significant difference in hard drive and catalog configurations. As long as you are working off of an internal SSD or a high-speed mechanical hard drive, whether you put the catalog and cache on the operating system drive, or a secondary SSD/high-speed mechanical data drive, it really won’t make a difference in Lightroom performance.

However, we did conclude that you are always better off using the fastest hard drive, even if that drive is also the operating system drive.

Single vs Dual CPU
Adobe has never claimed within their recommended hardware specifications that having two CPUs in a computer improves performance in Lightroom. Before the test, we suspected from past experience that a single CPU with a higher individual clock speed would be faster than dual CPUs with a comparable combined clock speed. However, we wanted to get hard evidence to support this assumption.

Based on our tests with both Lightroom 4 and 5, there is indeed no performance gain from having two CPUs. In fact, our single-CPU machine with a faster individual clock speed significantly beat out the dual-CPU setup with a similar combined clock speed.

To conclude, as of right now with Adobe Lightroom 4 and Adobe Lightroom 5, you will get the best performance out of a single high-speed processor. Overclocking is also a valid option for advanced users to boost Lightroom performance.

Lightroom 4 vs Lightroom 5 Performance
Adobe Lightroom 4 is indeed faster than Lightroom 5 in all tests. Lightroom 5 was slower across the board by around 10-20% compared to Lightroom 4 in 1:1 Preview rendering, exporting, and image-to-image Develop Module lag.

Smart vs 1:1 Previews
Finally, we unfortunately discovered that no combination of Smart Previews and 1:1 Previews consistently results in faster performance. In other words, we couldn’t conclude that rendering 1:1 and Smart Previews together was actually faster than rendering 1:1 or Smart Previews alone.

For details on all testing procedures and results, please continue reading.

Test Machine Specifications: THOR vs. HULK

For our tests, we are taking our dual-Xeon computer, dubbed HULK, and pitting it against our current performance king, THOR (and yes, we do like to name our high-end machines after Marvel characters).

We suspect that because Lightroom has not been optimized for dual CPUs, the faster single Intel i7 processor in THOR will produce faster results when compared to the slower dual-Xeon setup in HULK. We will also be testing THOR in normal mode (CPU at 3.2GHz) and overclocked mode (CPU at 4.3GHz).

Here are the primary component specifications for each machine:

THOR Specifications


HULK Specifications


Lightroom Performance Testing Procedures

For all of our Lightroom tests, we created a single “Test Catalog” for Lightroom 4 and Lightroom 5 with the exact same Develop Preset applied to each image from the SLR Lounge Lightroom Presets. Our Test Catalogs for both Lightroom 4 and Lightroom 5 were uploaded unaltered to a single external drive, and then loaded from scratch onto each machine just prior to testing. Between all tests, we would clear any cache and previews to ensure accuracy.

Hard Drive Configuration Testing Procedures

There is a lot of speculation out there when it comes to hard drive configurations in regards to the Lightroom catalog and cache. From our own experience, we haven’t noticed much of a difference regardless of the setup, so we wanted to put that theory to the test.

With each computer and CPU configuration, we ran 4 different test setups for both Lightroom 4 and Lightroom 5 using different hard drive and catalog configurations. Here are the different catalog and cache configurations.

Config 1 – Catalog and Cache on OS HD
Config 2 – Catalog on OS HD, Cache on Secondary Data HD
Config 3 – Catalog and Cache on Secondary Data HD
Config 4 – Catalog on Secondary Data HD, and Cache on OS HD

Hard Drive Configuration Testing Results

From all of our configuration testing, we noted that there was less than a 1% difference in testing results regardless of the hard drive configurations. In addition, the fastest variant (by only 1%) was always to simply use the Operating System SSD in both machines both for Lightroom 4 and Lightroom 5.

For this reason, we are only using times from Configuration 1 in all of the test result data below. We can conclude that as long as you are using an internal high-speed hard drive, the difference in performance is negligible. However, we should also note that Lightroom will always run slightly better when the catalog and cache are on the fastest drive available, even if that drive is also the operating system drive. Splitting the catalog and cache did not improve performance even when both drives were SSDs.

Test 1: Lightroom 4 1:1 Preview Rendering Results

Our first test consisted of simply measuring the amount of time it takes to render the 1:1 Previews for all catalog images within the Test Catalog. As expected, the faster processor speed of THOR in both normal and overclocked modes resulted in much faster times than the 2GHz dual-CPU HULK. In fact, when overclocked, the single-processor THOR is around 40% faster than the dual-Xeon HULK.

  • Dual Xeon HULK: 100.5 seconds
  • Normal THOR: 73.3 seconds
  • Overclocked THOR: 60.1 seconds

Test 2: Lightroom 4 Image Export

With the same develop presets applied to all images, we ran identical exports on all machines to test overall export times. Just like the 1:1 Rendering test, THOR in both normal and overclocked modes outperformed the Dual Xeon HULK machine.

  • Dual Xeon HULK: 109.6 seconds
  • Normal THOR: 80.7 seconds
  • Overclocked THOR: 66.1 seconds

Test 3: Lightroom 5 1:1 Preview Rendering

Afterwards, we moved on to our Adobe Lightroom 5 testing. As the graphics below show, across the board the 1:1 rendering time in Lightroom 5 is indeed slower than in Lightroom 4, regardless of CPU and hard drive/catalog configuration.


  • Dual Xeon HULK: 114.3 seconds
  • Normal THOR: 79.1 seconds
  • Overclocked THOR: 63.1 seconds

Test 4: Lightroom 5 Image Export

Lastly, our Lightroom 5 image export test results indicated a similar performance decline when compared to Lightroom 4. We also noted that there was no additional optimization allowing Lightroom 5 to better utilize Dual CPUs over Lightroom 4.


  • Dual Xeon HULK: 127.4 seconds
  • Normal THOR: 86.3 seconds
  • Overclocked THOR: 71.6 seconds
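For readers who want to sanity-check the percentages quoted in these four tests, the relative differences follow directly from the reported timings. A quick calculation sketch (the numbers are the seconds listed above):

```python
# Reported times in seconds from the four tests above.
lr4_preview = {"HULK": 100.5, "THOR": 73.3, "THOR_OC": 60.1}
lr4_export  = {"HULK": 109.6, "THOR": 80.7, "THOR_OC": 66.1}
lr5_preview = {"HULK": 114.3, "THOR": 79.1, "THOR_OC": 63.1}
lr5_export  = {"HULK": 127.4, "THOR": 86.3, "THOR_OC": 71.6}

def pct_time_saved(slow, fast):
    """Percentage reduction in elapsed time going from `slow` to `fast`."""
    return (slow - fast) / slow * 100

def pct_slowdown(old, new):
    """Percentage increase in elapsed time going from `old` to `new`."""
    return (new - old) / old * 100

# Overclocked THOR vs the dual-Xeon HULK in the LR4 preview test:
print(f"{pct_time_saved(lr4_preview['HULK'], lr4_preview['THOR_OC']):.1f}% faster")
# → 40.2% faster

# LR4 -> LR5 slowdown per machine:
for machine in lr4_preview:
    p = pct_slowdown(lr4_preview[machine], lr5_preview[machine])
    e = pct_slowdown(lr4_export[machine], lr5_export[machine])
    print(f"{machine}: previews +{p:.1f}%, exports +{e:.1f}%")
```

Running this shows the HULK slows down the most going to Lightroom 5 (about 14% on previews and 16% on exports), while the overclocked THOR loses roughly 5-8%.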

Single vs. Dual CPU Conclusion

Unfortunately, like Adobe Lightroom 4, Adobe Lightroom 5 is simply not configured to utilize dual CPUs.

There are many good reasons to upgrade to Lightroom 5, but outright speed is not one of them. Regardless of your hardware configuration, going from Lightroom 4 to Lightroom 5 will result in marginally slower performance in both 1:1 Preview rendering and in exporting images.

Image-to-Image Develop Module Lag Testing

For each machine, CPU, and hard drive configuration, we tested the image-to-image lag that became noticeable in Adobe Lightroom 4 and persists in Lightroom 5.

We wanted to test Lightroom 5 to see if there were improvements, especially since Lightroom 5 now features 1:1 and Smart Previews. To test, we timed image-to-image Develop Module lag within each catalog after rendering the following previews for each machine/configuration:

Lightroom 4 – 1:1 Previews
Lightroom 5 – 1:1 Previews alone
Lightroom 5 – 1:1 Preview + Smart Previews
Lightroom 5 – Smart Preview alone

We ran each test 3 times, clearing all cache and preview data between tests to verify accuracy.
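The run-it-three-times, clear-caches-between-runs protocol can be sketched generically. In the sketch below, `task` and `reset` are hypothetical stand-ins for the manual Lightroom steps (which we performed by hand); the point is that every measurement starts from the same cold state, and that the spread between runs tells you how trustworthy the mean is:

```python
import statistics
import time

def time_repeated(task, reset, runs=3):
    """Time `task` `runs` times, calling `reset` before each run so
    no cached state from a previous run skews the next measurement."""
    samples = []
    for _ in range(runs):
        reset()                      # e.g. delete preview/cache folders
        start = time.perf_counter()
        task()                       # e.g. render 1:1 previews, export
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), max(samples) - min(samples)

# Example with a stand-in workload (a short sleep instead of Lightroom work):
mean, spread = time_repeated(lambda: time.sleep(0.01), lambda: None)
print(f"mean {mean:.3f}s, spread {spread:.3f}s")
```

A spread that is large relative to the mean (as we saw in the Smart vs 1:1 Preview runs below) is a sign the numbers aren’t usable for ranking configurations.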

Image-to-Image Develop Module Lag Results

We had initially suspected that Lightroom 5, with its 1:1 and Smart Preview functionality, would be able to outperform Lightroom 4 in image-to-image Develop Module lag.

Unfortunately, not only did it not outperform Lightroom 4, the results were actually quite surprising. Overall, we found that Lightroom 5’s image-to-image lag was between 10-20% slower than Lightroom 4’s, regardless of the system, CPU, and hard drive configuration.

What was even more surprising was the inconsistency in the test results. Each test was run 3 times; sometimes Smart Previews alone were quicker, sometimes 1:1 Previews alone were quicker, and at other times the expected 1:1 + Smart Preview combination was quickest. Either way, we were unable to generate consistent, usable numbers to determine Lightroom 5’s optimal preview settings configuration.

The only consistent factor we found was that despite rendering Smart and 1:1 Previews, Lightroom 5’s image-to-image Develop Module speed was at least 10% slower than Lightroom 4, and at some points it was up to 20% slower.


We had high hopes for Lightroom 5. We hoped that Adobe would give Lightroom 5 the ability to utilize additional system resources for greater speeds. What we found was that Lightroom 5 still under-utilizes system resources. On average, regardless of the system build, CPU, and CPU clock settings, Lightroom was only utilizing around 30-60% of the CPU. It was only during exporting that utilization reached 100%. We were hoping Lightroom 5 would feature some sort of pre-cache functionality that would enable Lightroom 5 to have better image-to-image Develop Module lag. Unfortunately, it is moderately slower in image-to-image lag as well.

This means that for Lightroom 4 and Lightroom 5 we are left with the same conclusion: once you have an SSD to work from and more than the recommended 4GB of RAM, the only thing that is going to boost speeds is a faster (or overclocked) single CPU.

We are going to test these machines in Adobe Premiere CC as well, and then we will be going back to Newegg to build out our dream machines for Adobe Lightroom still editing and Premiere video editing.

I hope you all enjoyed this article; feel free to comment and leave your suggestions below.

This site contains affiliate links to products. We may receive a commission for purchases made through these links; however, this does not impact the accuracy or integrity of our content.

Founding Partner of Lin and Jirsa Photography and SLR Lounge.

Follow my updates on Facebook and my latest work on Instagram both under username @pyejirsa.

Q&A Discussions


  1. Ben Young

    Late to the party, but this test didn’t tell me much.
    A more comparative test would be to run the tests on the dual xeon machine with both CPUs installed and then run them again on that same machine with just one CPU installed – with that being the only change to the hardware and no changes to the software.

  3. babar asghar

    Hi there,
    A nice try at comparing the performance of two versions of LR, but I think it's pointless in a way: you're comparing two different software versions on two different machines. There are so many factors to consider in the overall hardware configuration.
    You're also confusing two CPUs with single vs. multi-core; in fact they are two entirely different systems, one being single-CPU and the other dual-CPU. Optimizing a dual-CPU machine is also a tricky thing; RAM latency and timings play a crucial role here too.
    You should have tried both software versions on the same machine to ideally check their performance differences. I see a professional photographer may not be a hardware expert too.
    Software is still not ideally optimized to take full benefit of quad- and hex-core CPUs, let alone dual-CPU setups! I would strongly suggest running both LR4 and LR5 on THOR and then sharing the performance difference!
    Honestly speaking, this comparison of yours is not a comparison of software but maybe of machines!

  4. Azzitude

    The dual Xeons will tear THOR to pieces in video rendering with that workstation card you have, and if you have 2, 3 or 4 in SLI it will kill THOR.

    As for the problems in LR4/5: go into the BIOS, turn off Hyper-Threading, and retest. Also, if it were me building that “Dream Machine” (and I build for a lot of photo/video people), I would max out the RAM and quit using that cheap inferior stuff you're using and go to G.Skill 2133 RAM or better with low CAS timings like 8-9.

    After you have, say, 128GB of RAM, you could set up a 64GB “RAM disk” and use it for all paging files, temp stuff, and maybe even move your working files into it if they fit, leaving room for other things to still run… you'd probably see at least a 50% increase in speed while working with files.

    The biggest bottleneck you have is the hard drives… even SSDs cannot come close to a RAM disk for throughput or sheer speed.

    Hope this helps. And you're not missing a lot with Hyper-Threading turned off, because now the system can fully utilize the full bandwidth into the core instead of sharing it. This is why gamers are not buying i7 CPUs; they opt for the i5, which is pure raw power when you have something that uses only a couple of cores on a CPU.

  5. Abrown86

    I’m curious to see the GPU performance. I’ve always wondered which performs better. But they are both created, marketed, and bought with very different tasks in mind, making a direct head to head comparison difficult. When do you think you guys will get the GPU results?

  6. Steve

    Just found this article and love the information. I have often wondered the same things about my machine. I am assuming you had Hyperthreading turned on for both machines. Have you tried turning it off?

    I noticed my i7-3770K was not close to using all its resources for any LR actions until I turned HT off. I don’t know if programmers can optimize something for HT, but it seems to me that LR and HT don’t like each other. In fact, I can get faster exporting results with HT on if I start 2 export actions with 1/2 of the pictures in each.

  7. Eric

    So glad to see this review, guys. I'm so tired of debugging and trying to improve performance with Lightroom. I've spent hours with Adobe support also… Just bought LR5 last week, and yeah, nothing better from it. My problem might be a little bit different though; I also tried all kinds of configurations. Currently here's what I have:

    Dell U2711 (2560×1440)
    Intel Core 2 Quad Processor Q9450 (12M Cache, 2.66 Ghz, 1333 Mhz FSB)
    2 x OCZ Vertex 3 120GB ssd in raid (catalog, Win7 x64 and LR5 are on it and some recent pictures, all the rest is on my NAS)
    XFX Radeon HD 6870 900M 1GB DDR5
    8 GB DDR3 ram

    I always have lag when switching between pictures in Develop mode, or while zooming or applying filters. Also, all 4 cores always hit 100%. When I export, say, 200-300 pictures, all 4 cores are at 100% for 10 minutes (or however long it takes).

    The bottleneck here seems to be the CPU for sure… but damn, it's really not that bad of a CPU; how can it need more than that, based on what they say in the recommended hardware?

    Anyways, I'm currently investigating whether the Quadro K2100 or K4100 could help, or if I just need the best freakin' CPU out there… looking to change that not-so-old computer for something more recent and mobile… Dell Precision M4800 or M6800 maybe?

    • Simon

      Reasonably solid review. At the high level, I would say if your budgets permit, having more CPUs/Cores will help Lightroom to be more responsive if you have multiple things going on at the same time (ie when you are multi-tasking).

      Now some comments on the tests:
      1. Tests for building out the 1:1/Smart Previews and the exports really measure the raw performance of the full imaging pipe without taking any meaningful shortcuts. As long as Adobe keeps adding new capabilities to the imaging pipe, it is very difficult to see significant improvement on the same hardware. However, the Adobe team spent a significant amount of effort improving interactive editing performance during image adjustments, using various techniques such as caching and shortcuts. It is in this area that you will see meaningful improvements in LR5.

      2. As to the raw performance slowdown in LR5.0 over LR4.0, some of you might have already noticed that LR5.0 uses about 50% fewer CPUs/cores than LR4.0. This is a deliberate choice. LR4.0 spawns as many worker threads as there are logical cores (over-taxing the machine), whereas LR5.0 only spawns as many threads as there are physical cores to do the same (or maybe slightly more) work. Customers can use the other half of the logical cores for other tasks.

      3. When Smart Previews (or DNG fast load data) are present, Adobe's internal automation tests show a significant speed-up (from the time of the switch to the dismissal of the Loading bezel). Depending on how the test in this article measures the time, the results may differ. Considering the total amount of work needed to load the Smart Previews/DNG Fast Load Data and the full-resolution negative, there is actually more work involved in taking advantage of the Smart Previews (there are extra loads involved). But the user experience is quite different: LR5 is better in that it will present customers with a workable image preview sooner, although Lightroom actually does more work in the background.

      Hope this helps.

    • sean

      Simon, it’s #2 that has me infuriated. Why, when processing thousands of images, would I want to ‘use the other half of my CPU power for other tasks’?? Insane.

  8. Matt

    This article starts a necessary dialogue fueled by research. To further things along, I have a few questions for additional research, after an initial observation: comments about hardware futility and Lightroom seem to overlook the results here. Keep in mind that an overclocked 3930K was nearly twice as fast as dual Xeons! But more to the point, additional testing on systems with a heavy emphasis on clock speeds would answer most of the queries I pose.

    My questions concern the extent to which we can generalize a pattern from the test results posted here; namely, would a single-core be faster than an eight-core for any given clock speed? I would guess not in all cases.

    From the results of Pye's very good experiment, we learned that one processor is better for Lightroom than two – okay. The next question is “which processor is optimal?” Perhaps there is a sweet spot for cores and frequency, but I will not pretend to know whether any particular processor strikes the right balance (if there is a “right balance” to begin with).

    For all we know, Lightroom may utilize significantly more of, say, the quad-core 3770 than the hex-core 3970, such that the 3770 may actually outperform the 3970 when it comes to Lightroom workflow. If so, that would be great for limiting costs! But who knows?

    Put another way, a fundamental assumption behind Pye's test seems to be that going top-of-the-line yields the best results, but bottlenecking stems largely from software issues, not just hardware. Maybe this top-of-the-line assumption does not extend beyond the gaming/CS6 realm because of Lightroom's software idiosyncrasies.

    The best testing solution I can think of would be collaboratively using a single catalogue. If anyone is interested, I could create a 500-image catalogue of innocuous images, run a bunch of filters in random order, and post it for download on Google Drive or Dropbox. Participants could then submit various rendering times (e.g. 1:1 and export for the full catalogue) in addition to system specifications. If enough people did something like this, we'd learn a lot.

  9. Jesús C

    Isn't Lightroom taking any advantage of your multi-core CPU? Does Task Manager show 50%, 25%, 13% CPU usage for the lightroom.exe process? Just divide your catalog into 2, 4… parts when you have to export all photos, and start an ‘Export’ task for each part. All the cores will be running at 100% :)

  10. Cody

    Hey Pye thanks for the guide! I have been wondering about these things for a while now and after a bit of research, I have never found solid info like this! I just built a new system this week on the new i7 Haswell platform and it hasn’t shown a very significant performance upgrade from my Samsung series 7 Notebook.

    Here are my specs…
    i7 4770 OC'd to 4.4GHz
    16GB 1866 Kingston HyperX Beast RAM
    250GB Kingston HyperX SSD
    1TB WD Black HDD
    Nvidia GTX 760 OC'd to 1300MHz with 2GB RAM
    ASUS Z87 Pro board

    The system is blazing fast in every regard EXCEPT LR5 :( I even notice it does much better in PS but I never use PS! I really thought this system would make a huge difference but it hasn’t which is a major bummer! Then when I saw this video from you guys I got confirmation that LR doesn’t use all the resources it should!
    Have you guys sent this to Adobe? You guys are pretty well known, and I wonder, if this info were sent to them with all these comments, whether they would actually do something to upgrade the architecture of the program and how it uses resources. LR NEEDS to take advantage of more of the CPU, and I think they need to take advantage of the GPU (as PS does).

    Thanks again for the info! Hopefully Adobe will listen to all the complaints over the years of slower performing “updates” to LR!


  11. Ronald Nyein Zaw Tan

    Hi Pye,

    Thank you for taking the time to make this review. I bought LR4 when the software was cheap. I thought, “HEY! Let's add LR to my arsenal.” I have been a long-time user of PhaseOne CaptureOne PRO (since the 3.7.8 PRO version). Would your readers find it helpful if you repeated the test using PhaseOne CaptureOne 7.13 (the latest stable version)? I wanted to see how these results fare against C1PRO7.

    I don’t know if you know about C1PRO, but since version 6, they started using GPU by tapping into OpenCL processing when displaying images on-screen and during export.

    Thank you for your consideration. Needless to say, LR4 was an utter disappointment, and I am not the only photographer who feels that way. I remember reading and even participating in the official Adobe feedback forum, where hundreds of people complained about the slowness and unresponsive behavior of Lightroom.

    Yours sincerely,


  12. Terence Kearns

    I just tried DarkTable for the first time. Seems pretty snappy on a fairly old iMac I have (which has a limit of 3GB of ram – despite 4GB installed).

    I don’t think I’ll use it professionally, but it could prove a thing or two about performance. I think it is also SQLite on the backend (will have to check).

  13. Terence Kearns


    I have a feeling that NOTHING, for all intents and purposes, will make Lightroom ‘fast’ if you have a big catalog. It's one of those latency problems that is only minimally helped by faster clock speeds and wider processing pipelines, at least not as long as it's using a file-based catalog. They should build the thing to use a server, even if the server is running locally. They used an open-source file-based SQL database (SQLite); they could certainly use an open-source server-based SQL database instead. It would be trivial to port the code since the SQL acts as an abstraction layer. You should then be able to run it on your SQL server of choice – Oracle, MySQL, MS SQL, PostgreSQL, IBM DB2 (ba ha ha).

    They should re-write the whole backend to persist XMP into an eXist server (Pure XML database). Mutha of God that thing is FASSST! screw SQL…

    Ha ha ha… I just looked at the graphs and read the conclusion… “file handles (a feature of the kernel), filesystem format (NTFS in most cases), sector size” – this is what is holding Lightroom back from utilizing more CPU power, and there is nothing you can do about it. A decent database server using BLOBs (binary large objects) would make ALL the difference in the world.

    Not to mention the side benefit of being able to support a multi-user environment more easily to support collaboration.

    Adobe… you’re doing it wrong…

    • Terence Kearns

      I should add that it is a useful test that you've done here, because it proves the futility of hardware upgrades and shows that this is a software architecture issue. They're trying to solve an enterprise-grade performance challenge with a Mickey Mouse file-system solution.

      How many database applications that demand high performance choose Microsoft Access with its MDB files over that manufacturer's server-based product, MS SQL?

      What we have here is the same thing. Managing potentially hundreds of thousands of files with hundreds of thousands of entries and expecting to sift through your photos in real-time is an unrealistic ask with the current approach.

      What is not so easy to graph and benchmark is the simple task of having to go through your 4000 RAW files from the wedding you just shot, weed out the duds and select the pearlers. This is where the performance is needed the most, and it is where it is lacking. Yes you benchmarked preview creation, but that’s not the only thing slowing down the work task of having to sift… We know photomechanic is better for this, but that’s not the point – especially when there are better options that are possible for having the end-to-end workflow handled in one purpose-built tool. Lightroom is designed for sorting, so it needs to set itself the task of performing this function without getting in the user’s way by making them wait needlessly.

    • Terence Kearns

      “file-based SQL server” is meant to be “NON-file-based SQL server”

    • Terence Kearns

      Previews and everything else should be generated by the GPU and then sent to the RDBMS (as BLOBs) to be persisted.

      I'm pretty sure Bibble (now known as Corel AfterShot Pro) and Capture One also use the SQLite file-based approach. AfterShot is also unusable until previews are done. I have no experience with Capture One.

      As a photographer, the only thing I expect to wait for is the copying of the RAW files – everything else SHOULD be almost instantaneous.

  14. Dave

    If I understand this correctly, and I were shopping for a new Apple laptop, a 13″ 3.0GHz dual-core i7 MacBook Pro (Retina) would be faster running Lightroom than a 15″ 2.8GHz quad-core i7?
    That would save about $900 and be a smaller machine to take on the road!

  15. Chris Alleyne-Chin

    Thanks Pye. Glad to see this thoughtful comparison. I knew up front that this was the least impressive version of Lightroom ever. I jumped on LR5 when it came out, mostly because the radial gradients are useful to me, but that's really the only advantage I've found. The new freeform spot removal tool is far too limited (you can't edit an existing selection). As you pointed out, Smart Previews don't help performance as they should. I guess if I were working from an external drive it might be useful.

    Image-to-image lag is really frustrating. I can't imagine how bad it must be with a D800 or medium-format-sized file. I wish they had addressed some REAL issues, not wasted time on things like a new alignment tool.

    They could have addressed our issues with performance patches but so far no word.

  16. Jim W

    Interesting analysis, although my own testing doesn’t reach the same conclusions regarding the 1:1 Preview rendering and Export tests. I’ve been comparing timings (using the same set of data) since back in LR3, which I agree is naturally faster than either LR4 or LR5. However, one issue to consider is Hyper-Threading, which you don’t mention in your analysis but which can definitely influence the results.

    Up to and including LR4.2, HT offered a small advantage (but only maybe 5% or less) over disabled-HT tests, though this seemed to reverse with the LR4.3 release. After that it was generally quicker with HT disabled (5-10% slower with HT enabled). I assume you had the HT setting the same for your THOR versus HULK tests, and which versions of LR4 and 5 were you using?

    Using the LR 5.2 Release Candidate I see slightly faster times (both preview rendering and exporting) than using the LR5 Beta and LR4.4 (both with and without HT). Also, when comparing LR5.2RC against LR4.2, I find the LR5.2RC is faster than LR4.2 when HT is disabled; when it's enabled, the LR5.2RC is still slightly faster than LR4.2 for preview rendering, but slightly slower when exporting.

    Bottom line for me: the LR5.2RC, with HT disabled, is now approaching the same speeds as LR3.6, particularly in 1:1 Preview rendering, and outperforms any of the LR4 versions.

    Turning to your “Image to Image Develop Lag” tests, I’d be very interested to know how you measured that, and also WHAT you measured. Loading an image into Develop is a multi-stage process, with the initial preview reading being the shortest and therefore least consequential stage in the process. Next is the reading of either Smart Preview, Fast Load Data or ACR cache (depending on which of these exist, but an existing Smart Preview would be used in preference). That releases the sliders, and the final (longest) stage is processing the actual raw data. What were you measuring?

  17. Jeffrey Kuo

    Hey Pye, great benchmark and performance test! I may just have not seen it, but I didn't hear or read any mention of how many images were in the catalog or what cameras they came from (i.e., what megapixel count were the images?). It'll help put our custom-built systems into perspective to compare with these beastly rigs.

    The catalog is the most important part of the Lightroom workflow to put on a fast SSD because the standard previews sit with the catalog (1:1’s sit in the global LR cache) and those thousands of tiny files benefit from better 4K/tiny-file performance on SSDs.

    With 5D Mark III files my custom rig still feels sluggish:
    Intel i2500k @ 4.5GHz
    16GB DDR3 RAM
    Samsung 830 SSD for OS (overprovisioned to keep performance up)
    Sandisk Extreme SSD for cache, catalog, images (always kept at least 10% free space)
    Plenty of other mechanical drives for archiving finished projects

    • Jim W

      Slight error there… 1:1 previews sit in the same preview cache as the standard previews, not in the “global LR cache”.

  18. Petr Klapper

    Apples and oranges. Sadly, a useless test. With Lightroom it's always about the number of physical cores and especially their maximum speed. From experience, LR is able to utilize more cores, but some operations (like heavy lens correction) are still not fully optimized for more than 2 (or even 1) CPU cores, so they slow down processing no matter how many cores/CPUs you have. In other words, even a 16-core HULK at a slower clock speed can't compete with a smaller number of much faster cores. In conclusion, get the fastest 4- (to 6-) core machine possible; i7-4770K FTW for now, I guess. The 6-core Sandy/Ivy-E might sometimes be a bit faster, but probably not worth the price (same for the fastest Xeons, not the ones used in the test).

    • babar asghar

      The ideal comparison of the two programs would be on the same machine, not on two different machines!

  19. JorjeC

    Awesome article as always Pye & Friends but I felt the need to nerd out a bit :P
    I think you REALLY missed the mark in calling that machine the Hulk but NOT making it a point to light it with green. BUT you can still fix this error and call it Red Hulk from now on, as there IS a Red Hulk. Maybe later, if you build a big-specs beast again, you can light that one right and THAT can be the TRUE Hulk.
    end rant :P

    Thank you for all the awesome articles and website!

  20. Pierre Desjardins

    It would have been interesting to compare your test machine against a similarly built Mac. I have an 8-core Mac Pro with 10GB of RAM and three disks in a RAID array, and I find that some tasks in Lightroom are rather slower than expected, especially in the web module.

  21. Joao

    The fact that Lightroom is not optimized for multiple CPUs (is it even optimized for multiple cores, or multiple threads?) is baffling to me. I always regarded Adobe as a company that does things right and doesn’t push products out the door because of a yearly cycle… apparently I was wrong, and maybe that will change with CC.
    I’m on LR 5 and process tons of photos on a decent machine. The fact that I could save time with optimization on Adobe’s part makes me a bit mad… only offset by the fact that LR is a really cool product.

    On another note, I’ve also noticed that when doing work in the Develop module, the order in which you do things matters. It’s much faster to first tweak the colors, then the local adjustments, and leave sharpening, noise reduction and (especially) lens correction for last. Anyone else noticed this?
    I think it’s because if you first do lens correction and then apply, say, spot healing, it needs to do a bunch of calculations before applying the healing brush… a noticeable difference.
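    This observation fits a simple model: in a non-destructive editor, each new adjustment can force a re-render of the edit stack beneath it, so an expensive step placed early is re-paid on every later edit. A toy Python sketch with made-up costs (these are arbitrary relative units, not Lightroom internals):

```python
def render(steps):
    """Re-run the whole edit stack, as a non-destructive editor must
    each time a new adjustment is added on top."""
    return sum(cost for _name, cost in steps)

def total_interactive_cost(steps):
    """Cost of adding the steps one at a time: after each new edit,
    everything before it is re-rendered."""
    return sum(render(steps[: i + 1]) for i in range(len(steps)))

CHEAP, EXPENSIVE = 1, 20  # arbitrary units

lens_first = [("lens_correction", EXPENSIVE)] + [(f"heal_{i}", CHEAP) for i in range(10)]
lens_last = [(f"heal_{i}", CHEAP) for i in range(10)] + [("lens_correction", EXPENSIVE)]

print(total_interactive_cost(lens_first))  # 275 -- expensive step re-paid on every heal
print(total_interactive_cost(lens_last))   # 85
```

    The absolute numbers are meaningless; the point is that deferring the expensive step cuts the total interactive cost dramatically, matching the "lens correction last" experience.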

    • Lisa Mirante

      Joao, I cannot tell you how much I now love you! Your “on another note” is exactly the problem I was having and stupid me didn’t put 2 and 2 together. The spot healing became unbearably slow. I just did a quick test with 2 virtual copies, reset each, spot healed one before lens correction/sharpening and the other after and the difference is night and day.

      I have almost exactly the Thor build w/o overclocking, with a 670 video card and Crucial Ballistix instead of Mushkin memory. Everything else is the same. I could not figure out why the performance sucked so bad on this machine with basic editing. Now I know. Again, thanks, and I love you!

    • Marcel Samson

      Would you mind submitting these things to Adobe? LR5 is immensely buggy, the spot healing tool being one of the biggest offenders. A lot of people are complaining about unworkably slow response from the healing brush tool (me included). They have issued an RC patch fixing a few other glaring problems, but the healing brush is not one of them, unfortunately. It can be so slow and unresponsive that it’s sometimes easier to switch to Photoshop to fix stuff, which isn’t my preferred method, with the edits not being nondestructive and such…

  22. RHWeiner

    I’ve run LR since 1.0 (actually before that, if you consider RawShooter a precursor) and I agree that LR3 was WAY faster than 4 or 5. But sometimes it’s not about an increase or decrease in speed but about what has been added in features. LR is still not quite there if you take into account the number of times you have to pop out and do work on an image in PS. I spend a pretty penny (errr… many dollars) to maintain my current level of PS upgrading, only to find that I am really only using it for maybe 5% of my editing… and even that is a stretch. So what I would like to see are those image-centric aspects of PS incorporated into LR. It may cost more in the long run, but why spend the cash on a program you hardly ever use, if ever?

  23. jim

    Hi, in your experience, what would be the main reasons for a CR2 file corrupting in LR after the files have been downloaded?

  24. John

    I recently built a computer specifically because of how slow LR was on my old AMD quad core. Here’s what I built…

    Intel i7-4770K, water cooled with a Corsair radiator
    Gigabyte Z87X-UD4H motherboard
    32GB Crucial Ballistix RAM
    No graphics card
    2x 250GB SSDs in RAID 0 (Samsung 840s) for OS and working drives. They act as a 2X fast 500GB drive.
    2x 3TB Western Digital Reds in RAID 1 for data storage and redundancy.
    External HDD slot for easy backup or whatever.
    Windows 8 Pro

    Total cost about $1750

    I must say, when I first started using it I was BLOWN away. Unreal performance with LR4 (I’ve since upgraded to 5.2). However, I started noticing some lagging once I imported my whole catalog. Still way ahead of where I was, though. One area where I need help is knowing how best to manage catalogs (I use only one), the cache, etc.

    • Joey D

      Hi John, a couple of things:

      – You might read a little more real-world info on how RAID 0 works. It isn’t twice as fast in any condition; it might be a little faster IF 1) it’s not your OS drive, and 2) you are using a DEDICATED RAID controller. If you didn’t buy an add-on card, chances are you are using hardware-level software RAID (yes, that’s a real thing), and the performance gain is nominal at best.

      – Adding an add-on dedicated video card could help with the misc lag you are seeing. Your setup, assuming it’s configured correctly, should yield decent results. Yes, it’s true LR doesn’t send information to be calculated on the GPU, but that is part of the point. When you have a non-dedicated video setup, the CPU has to perform all of the display “rendering,” which means if the system is loaded down it has to tell all other processes to stop (a CPU interrupt) in order to display everything correctly and quickly. Adding a card would free the system from this switching.

      Also, if you haven’t done so, make sure you update the firmware on the SSDs.
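      The RAID 0 caveat above can be made concrete with a first-order model. All the numbers here (stripe size, bandwidth, latency) are invented for illustration, not measurements: a large sequential read can use both drives' bandwidth, but a tiny catalog-style read fits in one stripe, touches one drive, and gains nothing.

```python
def read_time_ms(size_kb, drives=1, stripe_kb=128, mb_per_s=500.0, latency_ms=0.1):
    """Toy RAID 0 model: the file is striped across `drives`; a read
    spanning multiple stripes uses drives in parallel, while a read
    smaller than one stripe hits a single drive and pays full latency."""
    stripes = -(-size_kb // stripe_kb)  # ceiling division
    drives_hit = min(drives, stripes)
    transfer_ms = (size_kb / 1024) / (mb_per_s * drives_hit) * 1000
    return latency_ms + transfer_ms

# Large sequential read: RAID 0 roughly halves the time...
print(read_time_ms(1024 * 1024, drives=1))  # ~2048 ms
print(read_time_ms(1024 * 1024, drives=2))  # ~1024 ms

# ...but a tiny 4KB read (catalog/preview-style access) is unchanged:
assert read_time_ms(4, drives=1) == read_time_ms(4, drives=2)
```

      Since Lightroom's catalog work is dominated by small reads, this is one reason striping rarely shows up in perceived performance.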

  25. Sumit

    How about testing on Mac OS? I’ve personally noticed that Mac OS, running on lower specs, is generally faster than Windows (though I was testing with Windows 7 at the time). I’m not sure why, but perhaps it’s the architecture?

  26. Juan

    I’m in the market for a low-budget upgrade to my 4-year-old iMac and was thinking about the Mac mini… but based on your article, the CPU speed won’t be enough. Should I go with an iMac again (top configuration: Mini = 2.6GHz, iMac = 3.4GHz), or should I save for a Mac Pro instead?

    • Pye

      If speed is a concern, Mac Pro is the way to go.

  27. JL Garcia

    I believe that Lightroom 3 is faster than Lightroom 4, so is version 5 slower still? Would you add Lightroom 3 to your tests?
    Just for curiosity’s sake. :)

    • Pye

      JL, it is faster. We did an LR3 vs LR4 test a while back. Ever since Process 2012 in Lightroom 4, we have seen a significant reduction in speed, mainly because there is far more processing functionality without any optimization or change to the code allowing Lightroom 4 and 5 to utilize more CPU and system resources. So what you have is a program with boosted processing capabilities, without the ability to use additional system resources to handle them.

  28. Rodrigo


    I noticed an error… a big one. A single core is not the same as a single CPU. An Intel Core family processor is a multi-core CPU, same as the Xeon family of workstation-grade processors. A regular desktop/laptop won’t accept multiple CPUs, but it can certainly use a multi-core CPU like Intel’s i5 and i7 families.

    Adobe Photoshop can take advantage of multiple threads, hence using all available processor cores, and even better, it can use CUDA so that high-end video cards (such as the NVIDIA Quadro FX family) make the editing experience truly remarkable… That being said, Lightroom does not take advantage of multiple threads, so that is why a single CPU like the i7 is better, but only because the clock is higher.

    Just thought I’d give my 2cents on the subject :)

    Good job Pye!

    • Pye

      Yeah, some others pointed out that in the video I said single core. I meant single CPU; I just confused the verbiage as I was on my rant. All the verbiage in the article should be correct, though. Yes, because Lightroom isn’t optimized to utilize a CPU’s cores and available threads, single-CPU clock speed is all that matters.

    • Joel

      Pye, thanks for doing these tests. I think Rodrigo’s concern is valid and goes far deeper than CPU vs. core. The number of cores per CPU is changing again (quad-core is becoming the new norm, with six, eight, and twelve cores standard on high-end CPUs) and your verbiage here is confusing. Yes, I think your overall conclusion that fewer, faster cores are better is still valid, but the way you present the data and process is vague and can lead to a lot of false assumptions and misinformation.

      I understand that most photographers don’t want to mess with the hardware too much, which is why talking about everything in cores (forget the number of CPUs) vs. clock speed would be SO much clearer and more useful. For example, you don’t actually list how many cores are in each test system. Yes, I could look that up myself since you give the Intel part number, but that adds to the confusion.

    • Marcel Samson

      As far as I know, it doesn’t matter to operating systems like Windows whether you use two separate dual-core CPUs or one quad-core CPU. Your computer will treat either as four separate cores. So, while his use of words may have been a bit confusing, in the end it doesn’t really matter. Lightroom seems to only use a single thread, aka one of the cores your computer identifies, no matter if it’s a physically separate core, one integrated on a multi-core chip, or a virtual one, like Hyper-Threading and similar systems.

    • Kieran

      Pye, I’m still confused about whether you’re referring to cores or CPUs, and your line saying “single CPU clockspeed is all that matters” has confused me more. Unless I’m completely mistaken, the speed of multi-core CPUs is quoted per core. So a dual-core 2 GHz CPU has two 2 GHz cores (4 GHz in total, if not in practice), while a quad-core 2 GHz CPU has four 2 GHz cores (8 GHz in total, if not in practice). I’m now unsure whether there’s any benefit in having a quad-core 2 GHz CPU over a dual-core 2 GHz CPU, or whether I should just avoid multi-CPU systems – which are very expensive anyway.
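      The "GHz in total" arithmetic above is exactly where the confusion lies: per-core clocks don't add up for a program that only runs one thread. A toy Python model (the numbers are made up, and real CPUs don't retire a uniform one instruction per cycle, so treat this as a sketch):

```python
def task_seconds(work_ginstructions, core_ghz, cores, app_threads=1):
    """Toy model: a task of `work_ginstructions` billion instructions,
    assuming one instruction per clock cycle. An app can only spread
    work across as many cores as it has worker threads."""
    usable_cores = min(cores, app_threads)
    return work_ginstructions / (core_ghz * usable_cores)

# A single-threaded app runs the same speed on a dual-core and a
# quad-core chip at the same clock -- extra cores sit idle:
assert task_seconds(10, core_ghz=2.0, cores=2) == task_seconds(10, core_ghz=2.0, cores=4)

# ...but a higher-clocked chip finishes sooner, regardless of core count:
print(task_seconds(10, core_ghz=2.0, cores=4))  # 5.0 seconds
print(task_seconds(10, core_ghz=3.5, cores=2))  # ~2.86 seconds

# Only a multi-threaded workload can cash in the extra cores:
print(task_seconds(10, core_ghz=2.0, cores=4, app_threads=4))  # 1.25 seconds
```

      So under this model a quad-core 2 GHz chip behaves like "2 GHz", not "8 GHz", for single-threaded Develop work.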

    • Christoph Malin

      Hi Rodrigo,

      While we did not test Photoshop here, I can replicate Pye’s experiences in a test of the new 6-core Mac Pro vs. other current (and some older) Mac hardware. Pye, thanks for the test anyway. Maybe you can have a look at this: in timelapse we often work with up to 3,500 images or more. Image-to-image lag then becomes a problem when you work on keyframes in Lightroom with the LRTimelapse workflow.

      I will refer to your test ASAP.


  29. Michael DRFilms

    Can’t wait for your Premiere CC tests! Be sure to test with and without the Mercury engine.

  30. Magnus

    I’d REALLY like LR to utilize dual- or quad-core processors… I recently upgraded from 3.6 to 5.0, and I must say: performance-wise, LR5 was a major letdown. 3.6 was actually notably faster than 5.0. I am using a quad-core setup OCed to 3.5 GHz, a 10K RPM HDD and 8GB of RAM, and editing in LR on this setup is a major pain in the ass. Not impressed by Adobe at all!

    • Pye

      Yep, LR3 was the fastest version of Lightroom. Things went south with Process 2012, which arrived in LR4.

  31. thomas

    I wonder if pairing the Quadro K4000 with the Thor X79 motherboard would make any improvement with Lightroom 5? I would never be able to afford the Quadro K6000, but one would think for the price it should be the ultimate! I’m looking to build a workstation next year and wondering if the Quadro K4000 would be a good fit for AutoCAD / Photoshop art / rendering. I’m looking at the Z87 ASUS motherboards and the Haswell i7-4770K, and I’d like to keep the price under $3,000 for my build. Thanks for the videos and information!

    • Pye

      At present, changing the graphics card will not make one bit of difference. Lightroom 4 and 5 have no graphics card acceleration whatsoever. If you are planning to use Autocad, and other 3D rendering software, then yes, a workstation card would be a good option, but don’t do it for Lightroom as it won’t make a single bit of difference there.

  32. Fred

    Nice review, Pye!

    I noticed that you switched to Crucial M4 SSDs from the PNY XLR8s in the original Newegg build.

    Any particular reason why?

    • Pye

      Hey Fred, that was a mistake. The correct components were PNY drives. I think I just got my specs mixed up from our other build. Thanks!

  33. yael

    So single-core CPU frequency is what drives Lightroom? Would it be interesting to try an AMD CPU for comparison, since they have higher frequencies?
    Too bad Adobe doesn’t handle multiple cores. I think it’s about time they work on that, since they haven’t really added new features in version 5.

    • Paulo098765

      I’d recommend you go with Intel; AMD is known for budget-oriented CPUs, and GHz doesn’t have everything to do with it. A 4.6 GHz i5-3570K beats AMD’s FX-8350 at 5 GHz. If you’re in the market, I’d also recommend waiting for the new CPUs. Usually there’s a 10–15% increase in performance and the cost stays relatively the same, +/- 10 bucks. I recommend Intel due to their overall value. Oh, and make sure you get a K series. A non-K series CANNOT overclock for that free performance. Trust me. Cheers, and no, I’m not British.

    • Pye

      Paulo is right. In general, when it comes to high-performance and workstation applications, Intel is the way to go. AMD gives good bang for the buck, but still lags in comparison.

    • Tomi

      Actually, I was suspicious of AMD myself, and relied a lot on all those out-of-real-world benchmark tests.
      Finally I decided to go with the AMD FX-6300, 6 cores running at 3.5 GHz (16 gigs of RAM, SSD, etc.), and Lightroom 5 is flying! No lag, ever. Previews, work in the Develop module, using several gradients and spot healing, working on 1,000+ files per session… no lag. Export of 24 MP files is also swift.
      Thus I don’t care about benchmark tests anymore, really.

    • Jens

      @TOMI Well, CPUs (general desktop CPUs, that is) are hardly a bottleneck today, and while Intel may have the overall edge on AMD, that doesn’t mean an AMD is a fail choice. They do match some of the mid-range Intel CPUs, which are the most commonly sold, and then there are always the off-case applications/games etc. where one brand outperforms the other like a mosquito on steroids.

      So AMD is still a solid choice; if you’re willing to pay more you can get more with Intel, but you would have to jump to the upper end of their CPUs instead…

      @YAEL CPUs are not as simple as “more GHz means more performance (per core)”. AMD and Intel differ so much in architecture that there are many more factors at play. All sorts of fancy buzzwords come to mind: IPC (instructions per cycle), branch prediction, L1/L2/L3 cache, cache latency, lookahead and so on… (AMD also often ships more cores.)
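      The factors listed above suggest a simple first-order comparison: single-thread performance scales with clock × IPC, while well-threaded workloads also scale with usable cores. A rough Python sketch; the core counts, clocks and IPC figures below are invented for illustration, not measured values for any real chip:

```python
def throughput(cores, ghz, ipc, threads_used):
    """Very rough first-order model: instructions per second is
    clock * IPC per core, times however many cores the app can use."""
    usable = min(cores, threads_used)
    return usable * ghz * ipc

# Hypothetical chips purely for illustration:
amd_like = dict(cores=8, ghz=5.0, ipc=1.0)     # more cores, higher clock, lower IPC
intel_like = dict(cores=4, ghz=4.6, ipc=1.5)   # fewer cores, higher IPC

# A single-threaded app sees only one core -- higher clock loses to higher IPC:
print(round(throughput(**amd_like, threads_used=1), 2))    # 5.0
print(round(throughput(**intel_like, threads_used=1), 2))  # 6.9

# A well-threaded task (e.g. batch export) could use all cores:
print(round(throughput(**amd_like, threads_used=8), 2))    # 40.0
print(round(throughput(**intel_like, threads_used=8), 2))  # 27.6
```

      Which is why raw GHz comparisons across architectures mislead: for mostly single-threaded software like this era of Lightroom, per-core IPC × clock is the figure that matters.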
