Light-O-Rama Forums

LOR not Multi-Core Optimized?


Rhyph

Recommended Posts

I'm not certain about this, but based on my experience working for a software development house (although I am not a developer or engineer myself), I think I have made a discovery.  Or not, as they may already be aware of it.  Sorry in advance for the long post, but I thought some details and explanation might help someone — if no one else, Light-O-Rama themselves.

 

When multi-core CPUs first came out many years ago, my company had a lot of performance problems with our software.  Our recommended workaround at the time was to tell our customers to go into Task Manager and set core affinity so our application used only one core.  Mind you, our users generally fell into the bracket of tech-savvy folks who don't mind poking at various settings on their systems, so I don't recommend the uninitiated try this.  I also don't know if it causes any ill effects, so if you try it, it is at your own risk.
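For what it's worth, affinity can also be set at launch time rather than through Task Manager: the Windows `start /affinity <hexmask>` command takes a hexadecimal bitmask with one bit per logical CPU. Here's a quick Python sketch (the helper name is my own) showing how that mask is built:

```python
def affinity_mask(cores):
    """Build the hexadecimal affinity bitmask used by Windows'
    `start /affinity` command: bit N set means logical CPU N is allowed."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return mask

# Pin to CPU 0 only:
print(hex(affinity_mask({0})))      # 0x1
# Allow CPUs 0 and 2:
print(hex(affinity_mask({0, 2})))   # 0x5
```

So, for example, `start /affinity 1 notepad.exe` launches Notepad pinned to CPU 0; substitute whatever executable you're experimenting with.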

 

As my display has grown over the past couple of years, my sequences are approaching 50 MB+ file sizes.  My sequencing machine (which is also my main desktop) is by no means meager even by today's standards, but it is starting to show its age.  It's a custom rig I built just shy of two years ago, back when the AMD Phenom II X6 1090T six-core processor was the top end: 16 GB RAM, 2 x 128 GB SSD OS drives, a 1 TB slave/data drive, 2 x NVIDIA GTX 285s, etc.  It's still quite speedy, booting to the desktop in under 15 seconds from a cold start on Windows 7 64-bit.  I consider myself a power gamer, and my machine is set up to show GPU, CPU, and memory usage right where I can see it on one of the three screens on my desk.

 

What I have found is that there seems to be some sort of multi-core issue plaguing the LOR sequencer and the visualizer.  As my sequences grow in size, LOR takes longer and longer to load them.  The visualizer has also started to slow down, and its frame rate is suffering badly: at idle it gives me in excess of 100 FPS, but during a run it drops as low as 8 FPS in some very busy sequences.

 

While this is all happening, I've noticed that my core utilization is thrashing wildly.  This was the giveaway for me, as it's the same issue we saw in our own software.  Now, I can't tell anyone how on earth you go about engineering software to fix this condition, but it is a problem.  My test to prove it out was simple: load a single large sequence right after starting the sequencer, then shut the sequencer down, restart it with core affinity set to a single core, and load the same sequence again.  In several repeated tests I cut the load time of a given sequence by as much as 30 seconds on some of my largest.  I also ran the same test with the visualizer and got around a 10 FPS gain under extreme load with its core affinity locked to a different single core.
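If anyone wants to repeat this A/B test, the measurement side is just wall-clock timing of the same step under each affinity setting. A minimal Python sketch (the function name and the stand-in workload are mine; the real "step" would be loading your sequence by hand with a stopwatch, or scripting it if you can):

```python
import time

def time_step(label, fn, *args):
    """Measure wall-clock time for one step of the A/B test
    (e.g. loading a sequence under a given affinity setting)."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.1f} s")
    return result, elapsed

# Stand-in for the real "load a sequence" step, which I can't script here:
_, elapsed = time_step("load (all cores)", time.sleep, 0.1)
```

Run the same step several times per configuration and compare averages, since single runs vary with disk caching.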

 

As many of us are using "modern" computing platforms, especially those of us who have very large displays, I think that if LOR could nail down and correct this performance issue, the sequencer in general would behave much better.  With core affinity locked, I have also seen almost all of the playback stutter and lag disappear from the sequencer while playing massive sequences.

 

I'm curious to know whether others see the same results, and whether LOR has any suggestions or input on the matter.  If you don't know how to set core affinity, I don't want to give those instructions here, because again I don't know the risks.  All I can say is that I've seen nothing but performance improvements, and no ill effects otherwise.

 

 


I've seen the same result on my show computer, which is used only for sequencing and running shows.  It takes quite a long time to load large sequences.  I think I have enough horsepower in the show computer; the specs are below.

 

I'm sure interested to see what will happen when I set core affinity to a single core tonight.

 

 

HP Pavilion HPE h8-1360t Desktop PC
• Windows 7 Home Premium 64
• 3rd Generation Intel® Core i7-3770 quad-core processor [3.4GHz, 8MB Shared Cache]
• 10GB DDR3-1333MHz SDRAM [3 DIMMs]
• 256GB Solid State Drive
• 500GB 7200 rpm SATA hard drive
• 1GB DDR3 AMD Radeon HD 7570 [HDMI, DVI, VGA via adapter]
• 600W Power supply


I am not a computer geek by any stretch of the imagination, but last night I noticed on my laptop that my 3rd core was up to 93% while loading a sequence, while the other three were barely utilized at all.  While running, none of the cores went over about 20%, if that high.  I am only running about 3,600 channels, though, over both E1.31 and the LOR network combined.



 

 

What I see is any given core spike up to nearly 100% usage, and the thrashing is that it doesn't stay on that core.  It will wildly spike one core to near 100%, then another, followed by another, etc., almost at the speed you just read this.   :blink:    That's where the actual issue is, in my opinion.  A true multi-core, multi-threaded application would basically lock four or more of the cores at their highest speed and load for the duration of the event.

 

 

Wondering what program or software you are using to monitor the core utilization?

 

I have been using the All CPU Meter and GPU Meter gadgets for years from these folks here:  http://addgadgets.com/

 

You'll also need to run a small application at start-up, called PC Meter, to read core temps and such for them as well.


I wonder if there is also a benefit to turning off hyper-threading?  After all, if the processes occupying a hyper-threaded core pair are not designed to sync up and work together, they are basically just sharing one CPU core.
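Following on the affinity-mask idea: if hyper-thread siblings are numbered (0,1), (2,3), ... — a common layout, though that's an assumption and worth verifying on your own machine — then a mask selecting every other logical CPU keeps one thread per physical core without turning hyper-threading off in the BIOS. A minimal sketch:

```python
def one_per_physical_core(logical_cpus):
    """Affinity bitmask selecting every other logical CPU, i.e. one
    logical CPU per physical core -- assuming hyper-thread siblings
    are numbered (0,1), (2,3), ..., which is common but not universal."""
    mask = 0
    for cpu in range(0, logical_cpus, 2):
        mask |= 1 << cpu
    return mask

# 8 logical CPUs (quad-core with HT) -> CPUs 0, 2, 4, 6:
print(hex(one_per_physical_core(8)))   # 0x55
```

That mask could then be fed to `start /affinity` the same way as a single-core mask.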


I am at 18,500 channels.  I loaded a sequence and imported the channel configuration — that's the step that haunts me.  It took 5 minutes 15 seconds on a particular sequence.  I then shut down the Sequence Editor, restarted it, set affinity to one core, loaded the same sequence, and imported the channel config: this time it took 7 minutes 47 seconds.

 

Not sure if I am doing something wrong, but I didn't see the same results you did.


BTW, LOR has been asked several times to make the Sequence Editor 64-bit; unfortunately, we are still waiting.


I'd imagine LOR is swamped right now.

 

From what I recall from previous discussions, the software is still 32-bit.  I don't know when, or if, it will go 64-bit.


  • 1 month later...
This topic is now closed to further replies.