 
 
 

(retroactively posted Nov 2015)

Recordings:

long losta cola


Music
June 2, 2015
there went the block
Music
May 25, 2015
the new angle
Music
May 12, 2015
and so returns the heat
Music
May 7, 2015
they think you are special
Music
May 4, 2015
go away
more taint
Music
May 1, 2015
not seventy eight again
drink 'n stencil
April 30, 2015

(retroactively posted Nov 2015)

Recordings:

last chance for the apes


Music
April 23, 2015
an incoherent tooth
Music
April 21, 2015
in need of something
Music
April 15, 2015
smac reprise
Music
April 13, 2015
strong men also cry
Music
April 10, 2015
alternate form


(retroactively posted Nov 2015)



Recordings:

no fools flat


My new mp3 player!
March 22, 2015
This week's distraction is my new minimal HTML5 audio player, which is about 8kb of HTML/CSS/JS (or 3kb gzipped), and available here. The HTML5 audio tag does all of the work; this just adds a basic AJAX-fetched media library, searching, and playlisting. I made it for all (1500 or so) of my recordings, but it should be pretty reusable...

Yes, no visualization, but that'd be so 1990s...

Recordings:

not the odds

2 Comments
Music
March 16, 2015
all according to plan
imperial odds
Music
March 12, 2015
gotta have standards
Music
March 10, 2015
crumbling plaster
Music
March 9, 2015
meltage
Music
March 7, 2015
another day another toilet
fixing to leave
Music
March 4, 2015
lonely disappointed
Music
February 23, 2015
short fuse
Music
February 19, 2015
dark rooms
white tighty
Music
February 16, 2015
not having visions
Music
February 12, 2015
angles and snowflakes
Music
February 4, 2015
raccoon hunger
Music
January 21, 2015
slog
Music
January 14, 2015
alex_jason - 1 -- [9:04]
alex_jason - 2 -- [9:47]
Music
December 14, 2014
always more
Music
December 6, 2014
freeform jam with very and googleable and v
Music
November 22, 2014
a cold pillar
Music
November 18, 2014
bitter
kiss
Music
November 6, 2014
spheresofwet
EEL overkill
November 4, 2014


Music
October 29, 2014
scheduling
First, from a recent 'git log' command: ...and to think, back when we used VSS we didn't even have commit messages! Soon after, "AudioChannel" became instantiable and went on to be known as "MediaTrack", and, as one would hope, many other things ended up changing.

Wow, 9 years have gone by.

I've been having a blast this week working on something that let me make this:

The interesting bit of this is not the contents of the video itself -- 3 hasty first-takes with drums, bass, and guitar, each with 2 cameras (a Canon 6D and a Contour Roam 2) -- but how it was put together.

I've spent much of the last week experimenting with improving the video features of REAPER, specifically adding support for fades and video processing. This is a ridiculously large can of worms to open, so I'm keeping it mostly contained in my office and studio.

Working on video features reminds me of when I was first starting work on what would become REAPER: I was focused on doing things that I could use then and there for things I wanted to make. It is incredibly satisfying to work this way. So for now I'm doing it in a branch (thank you, git), as it is useful for me, but still incredibly far from the usability standard that REAPER represents now (even if you argue that REAPER is poorly designed, it's still 100x better than what I've done this week). You can't go put half-baked, poorly-performing, completely-programmer-oriented video features into a 9-year-old program.

The syntax has since been simplified a bit, but basically you have meta-video items which can combine other video items on the fly. So you can write new transitions or customize existing transitions while you work (which is something I love about JSFX).

I'm going to keep working on this, it might get there someday. Former Vegas fans, fear not, REAPER isn't going to become a video editor. I'm just going for a taste...

6 Comments

Music
October 27, 2014
transitioning time
Music
October 21, 2014
light and heat
Music
October 18, 2014
oh no itsa g
Music
October 10, 2014
a quick woodland waltz
open windows falling apart
post waltz
yossy says
Music
October 6, 2014
possum pocket
Music
October 2, 2014
back are we and
Music
August 29, 2014
something newer
something over
Music
August 26, 2014
boots
damp with envy
Music
August 20, 2014
nude brick
Music
August 19, 2014
sundays are far away
Music
August 13, 2014
hookworms
licecap 1.25 beta 3
July 24, 2014
I just posted (to our prerelease site) LICEcap 1.25 beta 3, which includes support for using transparency for smaller images. I had some fun debugging this (including some very stupid mistakes on my part that took about an entire day to debug, oops).

I also had some fun writing logic to decide what to do when a pixel could be encoded as transparent, but also could be well-represented by an indexed color. Initially I had it only use the indexed value if the previous pixel was indexed, but it ended up being quite a bit better to track the occurrence of transparent pixels and pixels of that index, and use the one that is more common. There is probably a better algorithm to use here, but that saw some good gains. For comparison, I ran 1.24 and 1.25 beta 3 at the same time for a stupid demo video. The 1.24 version was 2.5MB, the 1.25 beta 3 version was 1.5MB. WIN. It ultimately is highly dependent on the content, though, so I might look at trying some other things out...
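To illustrate the heuristic (this is just a sketch of the idea described above, not LICEcap's actual code): when a pixel both matches the previous frame (so it could be written as the transparent index) and happens to match a palette entry, keep running counts of which encoding has been winning and go with the majority, which tends to produce longer runs of identical output bytes for the LZW stage:

  // hypothetical helper -- pick an output index for an ambiguous pixel
  static int s_trans_hits, s_index_hits; // recent usage counts

  int pick_index(int transparent_index, int palette_index)
  {
    // both encodings are valid here; prefer whichever has been more common
    if (s_trans_hits >= s_index_hits) { s_trans_hits++; return transparent_index; }
    s_index_hits++;
    return palette_index;
  }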



1 Comment

checking assumptions
July 15, 2014
Very often in computer code, integers are divided by powers of two (2, 4, 8, 16, 32, 64, and so on). These divisions are much faster than dividing by other numbers, since computers represent the underlying number in binary. In C/C++, there are two ways this type of division is typically expressed: normal division (x/256) or a right shift (x>>8). These two methods produce the same results for non-negative values, and depending on the meaning of the code in question, (x/256) is often more readable, so I use it regularly.

If x is signed and is negative, division rounds towards 0, whereas the shift rounds towards negative infinity (for example, (-1)/256 is 0, while (-1)>>8 is -1), but in situations where rounding of negative values is not important, I had generally assumed that modern compilers would generate similarly efficient code (reducing x/256 into a single x86 sar instruction).

It turns out, testing with two modern compilers (and one slightly out-of-date compiler), that this is not the case.

Here is the C code:

  void b(int r);
  void f(int v)   // using divides
  {
    int i;
    for (i=0;i<v/256;i++) b(v > 0 ? v/65536 : 0);
  }

  void f2(int v)  // using shifts
  {
    int i;
    for (i=0;i<(v>>8);i++) b(v > 0 ? v >> 16 : 0);
  }

Ideally, a compiler should generate identical code for each of these functions. In the case of the loop counter, if v is less than 0, how it is rounded makes no difference. In the case of the parameter to b(), the code v >> 16 is only evaluated if v is known to be above 0.

Let's look at the output of some compilers (removing decoration and unrelated code). I've marked some code as bold to signify instructions that could be eliminated (with slight changes to the surrounding instructions):

Conclusions?

I'm not sure what to say here -- modern compilers can generate a lot of really good code, especially looking at floating point and SSE, but this makes me feel as though some of the basics have been neglected. If I were a better programmer I'd go dig into LLVM and GCC and submit patches.

I should have also tested ICC, but I've spent enough time on this, and the only ICC version we use is old enough that I would just regret not using the latest.

For comparison, here is what I would like to see LLVM generate for f():


f():                              # using divides
	cmpl	$256, %edi
	jl	LBB0_3            # if v is less than 256, skip loop

	movl	%edi, %ebx
	movl	%edi, %ebp
	shrl	$8, %ebx          # ebx = v/256, since v is non-negative
	shrl	$16, %ebp         # ebp = v/65536, since v is non-negative

LBB0_2:
	movl	%ebp, %edi
	callq	_b
	decl    %ebx
	jnz	LBB0_2
LBB0_3:

Performance-wise, I'm sure they wouldn't differ in any meaningful way, but the decrease in size would be nice.

Finally: write for the compiler you have, not the compiler you wish you had. When performance is important, use shifts instead of divides, or use unsigned types (which really should generate the same code for (x/256) vs (x>>8)). Move as much logic out of the loop as you can -- yes, the compiler might be able to do it for you, but why depend on that? But most important of all: test your assumptions.
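For example, here is a sketch (not from the original post) of an unsigned variant of f(): since v is unsigned, v/256 and v>>8 are provably identical, so the compiler is free to emit the shift either way:

  void b(int r);
  void f3(unsigned int v) // unsigned: the divide can always become a shift
  {
    unsigned int i;
    // v >= 256 whenever the loop body runs, so the original "v > 0" test
    // is unnecessary here
    for (i=0;i<v/256;i++) b((int)(v/65536));
  }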

5 Comments
Music
July 8, 2014
heartburn
The office today
July 7, 2014



3 Comments

Music
July 3, 2014
simon - 1 -- [2:42]
simon - 2 -- [4:15]
simon - 3 -- [6:52]
simon - 4 -- [2:39]
simon - 5 -- [4:13]
simon - 6 -- [3:28]
simon - 7 -- [3:00]
simon - 8 -- [2:46]
simon - 9 -- [7:52]
simon - 10 -- [3:03]
simon - 11 -- [4:58]
I've been investigating, once again, the performance of drawing code-rendered RGBA bitmaps to NSViews in OSX. I found that on my Retina Macbook Pro (when the application was not in low-resolution legacy mode), calling CGContextSetInterpolationQuality with kCGInterpolationNone would cause CGContextDrawImage() to be more than twice as fast (with less filtering of the image, which was a fair tradeoff and often desired).
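In code, the change is a one-liner before the blit (a sketch -- ctx, r, and img stand in for whatever the drawRect: handler already has on hand):

    // skip the filtering pass when scaling/drawing the backing image
    CGContextSetInterpolationQuality(ctx, kCGInterpolationNone);
    CGContextDrawImage(ctx, r, img);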

The above performance gain aside, I am still not satisfied with the bitmap drawing performance on recent OSX versions, which has led me to benchmark SWELL's blitting code. My test uses the LICE test application, with a screen full of lines, an opaque NSView, and 720x500 resolution.

OSX 10.6 vs 10.8 on a C2D iMac

My (C2D 2.93GHz) iMac running 10.6 easily runs the benchmark at close to 60 FPS, using about 45% of one core, with the BitBlt() call typically taking 1ms for each frame.

Here is a profile -- note that CGContextDrawImage() accounts for a modest 3.9% of the total CPU use:


It might be possible to reduce the work required by changing our bitmap representation from ABGR to RGBA (avoiding sseCGSConvertXXXX8888TransposeMask and performing a memcpy() instead), but in my opinion 1ms for a good sized blit (and less than 4% of total CPU time for this demo) is totally acceptable.

I then rebooted the C2D iMac into OSX 10.8 (Mountain Lion) for a similar test.

Running the same benchmark on the same hardware in Mountain Lion, we see that each call to BitBlt() takes over 6ms, the application struggles to exceed 57 FPS, and the CPU usage is much higher, at about 73% of a core.

Here is the time sampling of the CGContextDrawImage() -- in this case it accounts for 36% of the total CPU use!


Looking at the difference between these functions, it is obvious where most of the additional processing takes place -- within img_colormatch_read and CGColorTransformConvertData, where it apparently applies color matching transformations.

I'm happy that Apple cares about color matching, but forcing it on (without giving developers any control over it) is wasteful. I'd much rather have the ability to transform colors before rendering, and be able to quickly blit to the screen, than have every single pixel pushed to the screen be color-transformed. There may be some magical way to pass the right colorspace value to CGImageCreate() to bypass this, but I have not found it yet (and I have spent a great deal of time looking, and trying things like querying the monitor's colorspace).

That's what OpenGL is for!
But wait, you say -- the preferred way to quickly draw to screen is OpenGL.

Updating a complex project to use OpenGL would be a lot of work, but for this test project I did implement a very naive OpenGL blit, which enabled an OpenGL context for the view and created a texture for drawing each frame, more or less like:

    glDisable(GL_TEXTURE_2D);
    glEnable(GL_TEXTURE_RECTANGLE_EXT); // rectangle textures: any size, pixel coordinates

    // create a texture and upload this frame's pixels (p, row stride of sw pixels)
    GLuint texid=0;
    glGenTextures(1, &texid);
    glBindTexture(GL_TEXTURE_RECTANGLE_EXT, texid);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, sw);
    glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER,  GL_LINEAR);
    glTexImage2D(GL_TEXTURE_RECTANGLE_EXT,0,GL_RGBA8,w,h,0,GL_BGRA,GL_UNSIGNED_INT_8_8_8_8, p);

    // draw one quad covering the whole view -- texture coordinates are in
    // pixels for rectangle textures, hence (w,h) rather than (1,1)
    glBegin(GL_QUADS);

    glTexCoord2f(0.0f, 0.0f);
    glVertex2f(-1,1);
    glTexCoord2f(0.0f, h);
    glVertex2f(-1,-1);
    glTexCoord2f(w,h);
    glVertex2f(1,-1);
    glTexCoord2f(w, 0.0f);
    glVertex2f(1,1);

    glEnd();

    // this naive version recreates the texture every frame
    glDeleteTextures(1,&texid);
    glFlush();
This resulted in better performance on OSX 10.8, each BitBlt() taking about 3ms, framerate increasing to 58, and the CPU use going down to about 50% of a core. It's an improvement over CoreGraphics, but still not as fast as CoreGraphics on 10.6.

The memory use when using OpenGL blitting increased by about 10MB, which may not sound like much, but if you are drawing to many views, the RAM use would potentially increase with each view.

I also tested the OpenGL implementation on 10.6, but it was significantly slower than CoreGraphics: 3ms per frame, nearly 60 FPS but CPU use was 60% of a core, so if you do ever implement OpenGL blitting, you will probably want to disable it for 10.6 and earlier.

Core 2 Duo?! That's ancient, get a new computer!
After testing on the C2D, I moved back to my modern quad-core i7 Retina Macbook Pro running 10.9 (Mavericks) and did some similar tests.

Interestingly, "Low Resolution" mode is faster in all modes except for GL, where apparently it is slower (I'm guessing because the hardware accelerates the GL scaling, whereas "Low Resolution" mode puts it through a software scaler at the end).

Let's see where the time is spent in the "Normal, Low Resolution" mode:

This looks very similar to the 10.8 non-Retina rendering, though some function names have changed. There is the familiar img_colormatch_read/CGColorTransformConvertData call, which is eating a good chunk of CPU. The ripc_RenderImage/ripd_Mark/argb32_image stack is similar to 10.8, and reasonable in CPU cycles consumed.

Looking at the Low Resolution mode, it really does behave similarly to 10.8 (though it's depressing to see that it still takes as long to run on an i7 as 10.8 did on a C2D, hmm). Let's look at the full-resolution Retina mode:

img_colormatch_read is present once again, but what's new is that ripc_RenderImage/ripd_Mark/argb32_image have a new implementation, calling argb32_image_mark_RGB24 -- and argb32_image_mark_RGB24 is a beast! It uses more CPU than just about anything else. What is going on there?

Conclusions
If you ever feel as if modern OSX versions have gotten slower when it comes to updating the screen, you would be right. The basic method of drawing pixels rendered in a platform-independent fashion to the screen has gotten significantly slower since Snow Leopard, most likely in the name of color accuracy. In my opinion this is an oversight on Apple's part, and they should extend the CoreGraphics APIs to allow manual application of color correction.

Additionally, I'm suspicious that something odd is going on within the function argb32_image_mark_RGB24, which appears to only be used on Retina displays, and that the performance of that function should be evaluated. Improving the efficiency of that function would have a positive impact on the performance of many third party applications (including REAPER).

If anybody has an interest in duplicating these results or doing further testing, I have pushed the updates to the LICE test application to our WDL git repository (see WDL/lice/test/).

Update: July 3, 2014
After some more work, I've managed to get the CPU use down to a respectable level in non-Retina mode (10.8 on the iMac, 10.9/Low Resolution on the Retina MBP), by using the system monitor's colorspace:

    // ask ColorSync for the monitor's profile, and build a CGColorSpace from
    // it so that CGContextDrawImage() has no color matching left to do
    CMProfileRef systemMonitorProfile = NULL;
    CMError getProfileErr = CMGetSystemProfile(&systemMonitorProfile);
    if(noErr == getProfileErr)
    {
      cs = CGColorSpaceCreateWithPlatformColorSpace(systemMonitorProfile);
      CMCloseProfile(systemMonitorProfile);
    }
Using this colorspace with CGImageCreate() prevents CGContextDrawImage() from calling img_colormatch_read/CGColorTransformConvertData/etc. On the C2D running 10.8, this gets it down to 1-2ms per frame, which is reasonable.

However, this mode appears to be slower on the Retina MBP in high-resolution mode, as it calls argb32_image_mark_RGB32 instead of argb32_image_mark_RGB24 (presumably operating on my buffer directly, rather than on an intermediate colorspace-converted buffer), which is even slower.

Update: July 3, 2014, later
OK, if you provide a bitmap that is twice the size of the drawing rect, you can avoid argb32_image_mark_RGBXX, and get the Retina display to update in about 5-7ms, which is a good improvement (but by no means impressive, given how powerful this machine is). I made a very simple software scaler (that turns each pixel into 4), and it uses very little CPU. So this is acceptable as a workaround (though Apple should really optimize their implementation). We're at around 6ms now, which is way better than 12-14ms (or the 29ms we were at last week!), but there's no reason this can't be faster. Update (2017): the mentioned method was only "faster" because it triggered multiprocessing, see this new post for more information.
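The scaler is about as simple as it sounds -- something along these lines (a sketch under my own naming, not the exact code; src/dst are 32-bit pixel buffers, spans in pixels):

    void scale_2x(const unsigned int *src, int src_span,
                  unsigned int *dst, int dst_span, int w, int h)
    {
      int x, y;
      for (y = 0; y < h; y ++)
      {
        const unsigned int *in = src + y*src_span;
        unsigned int *out0 = dst + (y*2)*dst_span, *out1 = out0 + dst_span;
        for (x = 0; x < w; x ++)
        {
          // each source pixel becomes a 2x2 block of identical pixels
          out0[x*2] = out0[x*2+1] = out1[x*2] = out1[x*2+1] = in[x];
        }
      }
    }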

As a nice side effect, I'm adding SWELL_IsRetinaDC(), so we can start making some things Retina aware -- JSFX GUIs would be a good place to start...

5 Comments

Saturday morning
May 10, 2014




Music
April 9, 2014
i did what exactly2
Music
April 7, 2014
not out of time
Music
April 2, 2014
no joke
Music
March 29, 2014
temporary
Music
March 27, 2014
another go
foomacmy
Music
March 26, 2014
waiting
Music
March 24, 2014
blerg
Music
March 13, 2014
fuckinhell wukka wukka


2 Comments