Sony Q&A: The must-have sensor tech of the future?


posted Tuesday, June 16, 2015 at 12:55 PM EDT


Sony unleashed a trio of new cameras last week, with some pretty extraordinary specs ranging from a 42 megapixel backside-illuminated, 100K ISO, phase-detect-AF full-frame monster (albeit a very svelte monster!) to a pair of new RX-series models based on a blazingly fast 1-inch sensor. (How does 1,000 frames-per-second video sound?)

Shortly after the announcement, I had an opportunity to interview Mr. Kimio Maki, Senior General Manager, Digital Imaging Business Group, Sony Corporation. I'd interviewed Mr. Maki once before, back in December 2013. Back then, Sony hadn't made the transition to the Alpha brand name for their mirrorless models, the Sony RX1 and RX1R were brand new, as were the original QX-series lens-cameras, and nobody -- outside of Sony, at least -- had any idea of just how much they were going to stir the camera market in the years to come. Maki-san is an interesting guy to talk to, because he more than anyone else has been responsible for the product directions that have so dramatically changed Sony's fortunes in the camera business.


Interview with Kimio Maki, Sony Corp.

Dave Etchells/Imaging Resource: Thanks as always for making time for me. We had no idea of what was coming, so I didn't have as much chance to prepare questions for you, but of course I had plenty that occurred to me during the presentation.

Kimio Maki
Senior General Manager
Digital Imaging Business Group
Sony Corporation

Kimio Maki/Sony Corp: [some joking and laughter over my typically geeky technical questions] You may have multiple questions, but actually, a 30 minute presentation is not long enough to let you understand everything, therefore we also have some samples to show you as a demonstration. These will help you see the true potential of the product itself.

DE: Oh, great!

[Ed. Note: Maki-san proceeded to show me a number of high-quality prints from the Sony A7R II, with each shot shown full-frame at 13x19 inches, and then a crop magnified to 1:1 relative to the printer's pixels (I don't know the resolution of the printer, so can't say what the equivalent print size might be, but would estimate that it had to be somewhere in the range of 20x24 or 24x36 inches).

He didn't know at what ISO sensitivity the shots were captured, but did say that they were above base ISO. Obviously, the pictures were worth at least the usual 1,000 words, so I can't do them justice here, but I will say that the detail was really remarkable, and I've looked at a lot of prints from high-resolution digital cameras.

It will be interesting to get a sample of the camera into our lab, and then compare its resolution against that of the medium-format Pentax 645Z, which has so far been the reference for ultimate resolution in our lab shots.]

DE: My first quick question: I had always understood that backside-illumination was much more of a factor for really small sensors, and that you get progressively less advantage as they get bigger. Now I guess with 42 megapixels, the pixels are still maybe small, but I was surprised that there would be enough extra benefit to go to the trouble of BSI.

KM: It is [a] small [benefit from BSI]. Let me explain about why we chose this size. I didn't decide on 42 megapixels first, then the sensor was created. When we created these devices, I prioritized ... I don't like to lose sensitivity, compared to the 36-megapixel [sensor], right? And also, I really wanted to realize 4K movies. They have to read/write faster, right? So 4K movies had to just fit, and also ISO sensitivity should be better than 36-megapixel sensors. So how many pixels is the best pixel size? Let's calculate it! That was the process. From that, the conclusion was that 42-megapixel was the best pixel size to realize 4K movies...

Bucking the trend: Thanks in no small part to Maki-san's efforts, Sony has seen both its interchangeable-lens and premium compact product lines grow, despite general contraction in the camera market. This time, I was most interested in learning a few more details about the trio of products that were announced, particularly the sensor and processor technology that led to their groundbreaking capabilities. Mr. Maki obliged with a generous slice of his time, and you can read the results of that conversation here.

DE: Ah, I see.

KM: ...on a per-pixel basis, and also to have a Super 35mm base; both were key. Super 35mm gives the best picture quality, from oversampling 15 megapixels down to the eight-megapixel 4K size. The picture quality is better than a professional video camera.

DE: Yeah, yeah.

KM: And also, in terms of ISO, this sensor's sensitivity is better than 36-megapixel chips. Sensitivity is more than 36-megapixel sensors. It's good. And also [the engineer's] conclusion was that 42 megapixels was best. But 42 is not a good word. <chuckles>

DE: <laughs> Yeah.

KM: If you say 45, or something, it's understandable.

DE: 45, 50...

KM: Yeah, 45 or 50, it's very clear for people. But actually, the engineers said "if you prefer 50, you have to lose sensitivity." And also, that number doesn't fit 4K movie at all.
[Ed. Note: That is, the pixel counts don't line up well with 4K movie pixel dimensions, to provide optimal results when downsampling.]

DE: Yeah, yeah.

KM: Of course, you could use 50 megapixels, but actually, the processor has to work harder, therefore it would be difficult.

While the A7R II can record 4K movies using the full area of the sensor, Sony called particular attention to its Super35 crop mode. Super 35 frame dimensions vary between manufacturers and even within a manufacturer's product line, but are generally in the range of a 1.4-1.5x crop factor relative to full frame - so they're kind of like an HD aspect ratio version of APS-C.

The big story with the A7R II's Super 35 crop mode is that the camera downsamples roughly 15 megapixels to the eight megapixels of a 4K frame without doing any line-skipping. The result is incredibly sharp 4K video.
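For those who like to check the arithmetic, here's a quick back-of-the-envelope sketch of that oversampling claim. The 15-megapixel figure is the approximate number Sony quoted; the exact Super 35 readout dimensions aren't public, so treat the result as a rough ratio only:

```python
# Rough arithmetic behind the Super 35 oversampling claim.
# super35_pixels is the approximate figure quoted in the presentation.
super35_pixels = 15_000_000          # ~15 MP read from the Super 35 crop
uhd_pixels = 3840 * 2160             # pixels in a UHD 4K frame

oversampling = super35_pixels / uhd_pixels
print(f"4K frame: {uhd_pixels:,} pixels")            # 8,294,400
print(f"Oversampling factor: {oversampling:.2f}x")   # ~1.81x
```

In other words, each 4K output pixel is built from nearly twice as much sensor data as a line-skipped readout would provide, which is where the extra sharpness comes from.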

DE: It sounds like the specific pixel count is really driven a lot by 4K, and then there was the need for sensitivity. You really wanted to do everything possible in that area, so even though it's a large sensor, you still used backside-illuminated technology to get a little more signal-to-noise ratio.

KM: Exactly. So as a result of other calculations, 42-megapixel is what came out. Then, in order to realize this kind of sensitivity, we have to prefer backside-illuminated. That's the first priority, to gather lots of light, more light, effectively. The other decision was in the wiring layers; we used to use the aluminum, but aluminum is not good enough to transfer the data faster. Therefore we change the material from the aluminum to the copper to make it faster. [Ed. Note: Copper is more conductive than aluminum, so there's less resistance in the circuit lines on the chip, meaning signals can propagate faster.] Therefore, then we could get a faster transition from the light to an electrical signal.

So we could identify the imaging sensor [characteristics]; the story came from the sensitivity, 4K-suitability. From that the number of pixels was decided. We also looked at what is the best way to create such a sensor and reversed it to get lots of light and change the [metallization] material to get faster. That's the kind of process we went through.

DE: Great, great.

KM: Therefore we didn't just pick any resolution, and ended up with 42-megapixel. Of course 50 [would] give better resolution, and 45 would be more impressive, but it wouldn't be perfectly balanced.

DE: Yeah, the story is that 42 megapixels is what the engineering led you to.

KM: Yeah, that's right.

DE: How does the noise level compare to that of the Sony A7R? You know, the A7R's resolution is 36 megapixels, but its sensor technology is older, and so perhaps less advanced.

KM: It is better.

DE: The A7R Mark II has better noise performance; better than the A7R.

KM: Better than the other one. Because the copper layer is a big improvement over aluminum.

Copper metallization has been around in the IC industry for years now - the photo at left is courtesy IBM Corporation, circa 1997.

Because copper is 40% more conductive than aluminum, it can increase circuit speeds and reduce power consumption. We believe Samsung was the first with copper metallization on the NX1's sensor, and now Sony's brought it to their product line in a big way.

We were surprised to learn that copper metallization helps image noise as well as speed.
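A quick sanity check of that conductivity figure, using standard room-temperature handbook resistivities. These are textbook bulk-metal values, not anything Sony disclosed, and real on-chip interconnect behaves somewhat differently than bulk metal:

```python
# Back-of-the-envelope check of the copper-vs-aluminum claim, using
# standard handbook resistivities at room temperature (ohm-meters).
rho_al = 2.65e-8   # aluminum, bulk
rho_cu = 1.68e-8   # copper, bulk

resistance_drop = 1 - rho_cu / rho_al
print(f"Copper lines have ~{resistance_drop:.0%} less resistance "
      f"than aluminum lines of the same geometry")   # ~37%, often rounded to 40%
```

Lower line resistance means less RC delay per wire, which is the mechanism behind both the faster readout and (as discussed below) the cleaner signal.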

DE: Oh, and so that helps the noise, not just the speed?

KM: Yes, a large, large role. Yeah.

DE: Oh, interesting. I would have thought it would be, just... it would be faster, I didn't realize it would improve your noise levels as well.

This next question is more of a request maybe, but we've had a lot of questions asking about raw format. And...

KM: Ah, raw. <laughs> 14-bit.

DE: Yeah, well 14-bit is OK, but many people are asking "could we please have uncompressed RAWs?"

KM: Sony RAW is compressed, not uncompressed. But if we're getting a lot of requests for it, we should make such a kind of no-compression raw. Of course we recognize that. But I cannot give you a guarantee when we're going to fix or not fix.

DE: Right. When you're going to address that, yeah.

KM: Sure, sure. And so we recognize the customer's requirement, and actually we are working on it.

DE: So it's something that you're aware of. I'm sure that the image processing pipeline is optimized for the way that it is now, but it seems to me that, while it might involve some trading off some performance, that it could just be a firmware change. Could it? Would you be able to provide uncompressed raw as a firmware update, or would it require new hardware?

KM: Right, yes. So... not hardware.

DE: It is firmware. OK, good! I think people would be willing to accept a slower transfer time or lower frame rate in an uncompressed mode. Some people really, really want that.

So the actual data readout from the A7R II sensor, I think I heard in the presentation that it's three times faster than the A7R. Does that mean that rolling shutter is reduced by that same proportion? Is it the case that rolling shutter is one-third as much with the A7R II vs the original A7R, because data readout is three times faster?

KM: There were many factors, the total is 3.5 [times faster]. As I said, the materials changed, and also the layout was changed, those are the main reasons.

DE: Yes - what I was actually asking was whether the rolling shutter becomes better by that same ratio. So 3.5 times faster; do you now have only one-third as much rolling shutter?

Mark Weir/Sony Electronics: I don't know that there's a numerical, arithmetic ratio.

DE: OK - I just figured rolling shutter was maybe tied to how quickly you could read it out, so if you could do readout faster...

MW: Well, it's not so much how fast you can read it out, it's how fast you scan it.

KM: So we're not talking about the speed of the rolling shutter [Ed. Note: That is, Sony is not focusing on promoting it as a key advantage of the A7R II], but of course when we presented about the 1-inch sensor, rolling shutter was faster there, around five times faster, than before. With the 1-inch sensor, before we developed such a sensor, in A/D conversion, one A/D converter [Ed. Note: Analog to Digital converter, the circuitry that translates an analog signal into digital values] controlled four pixels.

DE: Ah! Four pixels?

KM: Four pixels. Four pixels together.

DE: Oh, not just a line of A/D converters at the top and bottom of each column, then...

KM: Yes, four pixels together. And also, we read out the groups of four pixels in two packages, that means eight pixels together at one time.

DE: Ah.

KM: Before, the A/D converter was laid out on the topside and bottom side of the array, but now, we have the stacked structure. Now, we moved the A/D converter to the underside of the pixels.

DE: Oh, on the underside. So the A/D is on the back side of the sensor chip...

KM: Yes, this means we could increase the number of A/D converters, then we could read eight packages. Eight packages of pixels. This is four times more than it used to be. Therefore, we can read out four times faster than before.
[Ed. Note: It sounds like previously, when they had the A/D converters along the top and bottom sides of the sensor array, they actually had two rows of converters per side (each column of pixels has its own sets of converters), so they could read out a total of four pixels at a time, in two groups, one from the top row of converters, the other from the bottom row of converters.

Now, with the A/D converters on the back side of the chip, they have four times as many of them, so they can read out a total of 16 pixels at a time, four pixels each from four sets of converters, each "set" being four converters. This is a pretty staggering number of A/D converters: There are 5,472 columns of pixels in the Sony RX100 IV's sensor; with 16 converters per column, that means there are a total of 87,552 A/D converters on the chip!]
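The converter count above is just multiplication, but it's worth spelling out. Remember that the 16-converters-per-column figure is our estimate from Maki-san's description (four times the previous 4-per-column arrangement), not an official Sony spec:

```python
# Sanity-checking the converter-count estimate from the note above.
# converters_per_column is this note's estimate, not a Sony specification.
columns = 5472              # pixel columns in the Sony RX100 IV's sensor
converters_per_column = 16  # estimated: 4x the previous 4-per-column layout

total_converters = columns * converters_per_column
print(f"{total_converters:,} A/D converters on the chip")   # 87,552
```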

KM: I can say that also, it's not only the package read rate, other things are improved also, therefore the total improvement [is] five times faster than before.

We couldn't find an illustration of a more recent chip, but this photo shows how the A/D converters are arranged in a row along the long side of the sensor. Newer versions of this technology have A/D converters arranged at both the top and bottom of the sensor array. And now, with the sensor used in the RX10 II and RX100 IV, Sony has moved them to the back of the chip.

DE: So previously, the A/Ds were just at the edges of the array, but now that they're on the backside of the chip, you can have more of them and they can be distributed throughout the array.

KM: Exactly, yes.

DE: Ah!

KM: Then we also put the memory under the A/D converters. Then move [the data from] the A/D converters straight into the memory and then outside to the main chip.

DE: And so the memory is this stacked structure, it's not like hybrid packaging where you have chips glued on to chips, this is actually...

[Ed. Note: There was a lot of discussion here that I'll skip over. At first, it sounded to me like Maki-san was talking about memory circuitry integrated directly on the back of the sensor chip, but it turns out that a separate memory and processor chip are attached to the back of the sensor. They looked like pieces of silicon themselves, very thin slivers of material apparently bonded directly to the sensor itself. (Perhaps using some sort of solder-bump technology?) The macro shot below shows the back of the sensor package, with the memory and processor chips attached.]

DE: So this is really the first time that anybody has put circuitry on the back side of the sensor?
[Ed. Note: I was still thinking the circuitry was integrated in the sensor silicon itself at this point.]

KM: That's right, and then that is [a] little bit different from the mobile chips. The mobile phone sensors have a memory, but actually we have got a processor plus memory.

DE: You have processing plus memory.

KM: Signal processing together with memory.

DE: So the mobile phone sensors do have circuitry on the backside...

KM: Yeah, that's right.

DE: ...but it's just memory, whereas... And so now you actually have processors, you know, arithmetic processing on the back side.
[Ed. Note: They show me a sample of the sensor, with the two other chips on the back side of it - see further down for a photo of it.]

DE: Oh, OK.

KM: The point is, we have memory stacked right outside to the main chip, because the speed of the readout is limited. Therefore we need some memory to stack.

DE: Yeah, yeah.

KM: The memory data have to wait for the interface. Therefore, the readout rate is very fast, data goes to the memory, then memory has to wait for the speed of the interface which goes to the main chip.

DE: Yeah. That's been a big problem for system design in computers, everywhere, that you have this pipe between the processor and memory that's a real bottleneck.

KM: Exactly, that's right. Therefore the interface is very important, the speed of the interface is quite important.

This graphic from the presentation introducing the RX100 Mark IV and RX10 Mark II illustrates the concept of having both high-speed signal processing and DRAM chips attached directly to the back of the sensor chip. (See below for a photograph of the back of the chip itself.)

DE: And so you've distributed the memory and the processing. The processing that's on the back of the chip, then, this isn't like a full-blown image processor as have existed separately? These are presumably much simpler chips. Are they just doing things like the interpolation or the pixel-combining, and that sort of thing for video?

KM: Yeah, before we did.

DE: For video, yeah.

KM: But in terms of one-inch sensor we don't do anything, we are using everything and no pixel binning.

DE: Yeah - I didn't actually mean binning, I was thinking that it was... Because you have to translate from many more pixels to just the 4K size, and so I'm thinking it's not a binning, but it's a...

KM: Ah, yes!

DE: ...arithmetic, you know...

KM: Two by two, two by two...

DE: Yes. So it calculates an average, or it fits a curve or something.

KM: That's right. We are doing two by two.

DE: And so that all happens on the back of the chip. And is any noise processing there too, on the back of the chip, or is that more...

KM: Yes, and we are not doing noise reduction on the backside. We are doing the noise reduction on the main chip.

DE: On the main chip, OK.

KM: On the main chip.

DE: Ah, OK. I see.

[Ed. Note: To clarify the above, the new sensor in the Sony RX10 II and RX100 IV has both a low-level processor chip and a memory chip on the back of it. The memory chip basically acts as a very fast buffer memory, because it's coupled so closely with the sensor. It probably has a very "wide" structure, so it can clock in many pixels' worth of data on each cycle. Then the processor chip can rapidly access the data there, to downsample to the number of pixels needed for video resolutions, including 4K. (Meaning that this memory will necessarily be dual-ported or even triple-ported, to buffer transfer of the 4K data stream to the off-sensor processor.) I didn't think to ask Maki-san just how much memory was there, but I don't think it would be a large number of full-resolution frames. (On the other hand, though, it might be.)

It seems to me that the most important function of the back-of-chip memory would be to buffer sensor data for 4K video processing. Rather than using pixel-binning or line-skipping to reduce the pixel count, these new RX models sample *every* pixel on the sensor, and then perform a 2x2 pixel convolution or averaging digitally. That's an enormous amount of processing. Just in terms of the raw data, they've got 5,472 x 3,080 sensor pixels in the 16:9 aspect ratio, a total of 16.9 megapixels, that they're reading out at 30 frames/second. That's a bit over 500 million pixels per second, and they have to "slide" the convolution operator over them, to make an awkward 1.425:1 reduction in pixel count in each direction. So there are likely 4x that many multiplication and addition operations taking place; simple math suggests it's thus taking roughly 2 billion arithmetic operations per second to turn the sensor data into a 4K video stream.]
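To make that concrete, here's the throughput arithmetic from the note above, together with a toy 2x2 box average on a tiny "frame." The actual resampling filter Sony uses isn't public; a plain average is just the simplest stand-in for illustration:

```python
# Throughput arithmetic for the RX-series 4K readout (figures from the note above).
sensor_px = 5472 * 3080          # 16:9 readout region, ~16.9 MP
fps = 30
pixels_per_second = sensor_px * fps
print(f"{pixels_per_second / 1e6:.0f} Mpixel/s read out")   # ~506 Mpixel/s

# Toy 2x2 box average on a 4x4 "frame", producing a 2x2 output.
frame = [[1, 3, 5, 7],
         [1, 3, 5, 7],
         [2, 4, 6, 8],
         [2, 4, 6, 8]]
out = [[(frame[2 * r][2 * c] + frame[2 * r][2 * c + 1] +
         frame[2 * r + 1][2 * c] + frame[2 * r + 1][2 * c + 1]) / 4
        for c in range(2)] for r in range(2)]
print(out)   # [[2.0, 6.0], [3.0, 7.0]]
```

The real downsample is messier than this (the 1.425:1 ratio means the 2x2 window doesn't land on clean pixel boundaries), but the sketch shows why every input pixel feeds multiple arithmetic operations.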

KM: So this is an engineering conversation, right? <laughs> I [just] sell, so this is very hard to explain about, but sometimes I may say everything. <laughs>
[Ed. Note: I think Maki-san is joking that he worries sometimes that he will say too much, in answering all my questions :-) ]

DE: Oh, no, that's ok. No, we're good. I'll move on to less-deep things. Ah -- but this one more is slightly deep... For phase-detect autofocus, it seems to me that there's maybe...

KM: You mean phase-detect autofocus on... full-frame, right?

DE: Yes, on the chip, on the full-frame sensor. It seems like a challenge would be to be able to read that data out quickly enough to be able to do a fast autofocus. Do you read out the -- I'll just call them focus pixels, that was Apple's very descriptive term for them -- do you read out those phase-detect pixels more rapidly than the viewfinder refresh? I mean, can you go in and just read those pixels from the sensor separately from the viewfinder refresh?

KM: Yes. It is faster. That kind of latency is not good... not exactly the same one. The EVF is a little bit delayed, the delay is 0.2v.
[Ed. Note: I'm not sure what Maki-san meant by 0.2v - at least it sounded like "v". The main point is that there's too much latency and too low a refresh rate in the EVF update cycle, so they need to read out the phase-detect pixels separately and more rapidly than the EVF refresh, in order for the autofocus to be able to work quickly enough. I think he said in some of the back and forth that's not included here that the focus readout was twice as fast as that for the EVF refresh.]

DE: What about the low-light limit when you have phase-detect pixels? It seems to me that the separate phase-detect sensors like in an SLR, they can have big pixels, but on the sensor chip, we have, I mean they're limited to the size of the actual pixels.

KM: That's a good question. So, when we use... in terms of autofocus, we are using a hybrid focus, so first we use the phase-detection focus to get close to the object. Then near the object, we start to use contrast AF to get a clear peak. Then we adjust the focus. Then in order to make very precise focus (because 42-megapixel means very very small pixels), and also the focusing point could be a very small one, therefore we [use] phase-detection together with contrast AF, contrast focus.

DE: Right, right. And does that help you deal with lower light levels, as well?

KM: Yes, yes.

[Ed. Note: Again, there was some back and forth here that was pretty fragmented, so I didn't include it. The central point seemed to be that, especially in low-light situations, the phase-detect AF wouldn't be accurate enough by itself, so they use a hybrid of phase-detect AF, to get into the right ballpark, then contrast-detect AF, to find where the actual point of best focus is.

This answered a question I'd always had about on-chip phase-detect focusing, in that the image sensor pixels are quite small, and have to be read out quickly (meaning they must have a short exposure time), to have a responsive AF system. The phase-detect pixels are also shaded, to "see" only light rays coming from one side or the other, so they're only getting half as much light as normal image-forming ones. Under low-light conditions, both factors result in a small focus signal with lots of noise on it. Contrast-detect AF is more capable under those conditions, because it's looking across a larger number of pixels to develop its goodness-of-focus signal.]
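For readers curious what "contrast-detect" means computationally, here's a minimal illustrative sketch: step the lens through candidate positions, score each one by local contrast, and pick the peak. Everything here (the scoring function, the fake sensor readout, the focus positions) is invented for illustration; real implementations are far more sophisticated:

```python
# Minimal illustrative sketch of the contrast-detect half of hybrid AF.
# All functions and numbers below are hypothetical, for illustration only.

def contrast_score(pixels):
    """Sum of squared neighbor differences: higher = sharper."""
    return sum((b - a) ** 2 for a, b in zip(pixels, pixels[1:]))

def simulated_readout(lens_pos, true_focus=7):
    """Fake 1-D sensor line: detail coarsens as we move away from true focus."""
    blur = abs(lens_pos - true_focus) + 1
    return [((i // blur) % 2) * 100 for i in range(64)]  # blurrier = wider stripes

# Phase detect gets us near position ~7; contrast AF then refines around it.
candidates = range(4, 11)
best = max(candidates, key=lambda p: contrast_score(simulated_readout(p)))
print(f"Best focus position: {best}")   # 7
```

Because the score is summed over many pixels, random per-pixel noise tends to average out, which is why this approach holds up better in low light than the half-shaded phase-detect pixels do.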

DE: Let me go back to the RX series, the one-inch sensors again. It's a new chip, obviously, with a number of improvements and many more A/D converters. Did those changes improve the basic signal to noise ratio coming from the sensor, or is it really that it's not such a big difference there, it's more just about increased speed?

KM: In terms of noise reduction, that process is existing in a different part, therefore that's directly...

DE: Yeah. I wasn't meaning so much noise reduction, but the actual signal coming from the sensor. Is it any cleaner?

KM: Yes, it's cleaner.

DE: Hmm.

KM: And so we needed to see the back side, here.
[Ed. Note: Maki-san proceeded to show me a little display card with a sensor on it that I'd seen upstairs at the event, but hadn't noticed that there was a cutout in it, so you could see the back of the chip as well.]

DE: Oh, yeah.

KM: Or here. Can you see? Can you see the back side of this? The normal one doesn't have this kind of hole. [Ed. Note: There's an opening in the back of the sensor package, exposing the back side of the sensor chip itself.] This is algorithm [Ed. Note: Meaning that it's the processor chip], two pieces of silicon are there, one is memory, one is logic. Can you see that one?

DE: There's two pieces of silicon. Ah, yes! I see there's... If you get the reflection just right...

KM: That's right, yeah.

DE: So with the RAM, you said that it was DRAM, so it's not static RAM?

KM: Yes, DRAM.

DE: I would think that having the RAM and processor on the back of the chip would cause further heat problems. I mean...

KM: No. Not really.

DE: Not really? Huh, I wouldn't have imagined that!

KM: In terms of heat.

DE: Interesting. I guess the processor there is doing only fairly limited processing; it's not like you have a big DSP or something, so...

KM: Yeah, exactly.
[Ed. Note: It wasn't clear on the audio recording, but I think he said here that the sensor itself generates more heat than the mini-processor or memory.]

DE: One claim in the presentation was that there's really no rolling shutter with this new one-inch chip now, that it's really not an issue.

KM: Ah, well, the chip in the RX, it's not nothing.

DE: Yes, of course, there'll always be some rolling shutter.

KM: It still exists. It's getting smaller, five times smaller than before. Because it's five times faster than before, that's what slanting like this. [Ed. Note: Maki-san was gesturing to indicate that, while there's still distortion in moving objects, it's very small.] This is faster, too small. You get it?

DE: Yeah, yeah, I understand....

KM: Yeah. It's not a global shutter, so in terms of global shutter, that reads everything down to the memory. Pixel by pixel, the data goes straight to memory.
[Ed. Note: He's not speaking of digital memory at this point, but rather that, with a global shutter, the information from the pixels is transferred into analog memory (essentially storage capacitors, integrated into the sensor chip) all at once, across the entire array.]

DE: Yeah, yeah.

So-called rolling shutter (a term from the days of film-based movie cameras) is the bane of action videographers. The progressive capture of each frame can cause severe distortion of moving subjects. This chopper's rotor blades aren't actually curved like that :-)

"Jamtlands Flyg EC120B Colibri" by Jonen. Licensed under CC BY-SA 3.0 via Wikimedia Commons.

KM: But actually, that is a very difficult technology to create. On the manufacturing side, on the design side.

DE: Yeah.

KM: So actually, this is the alternative, to realize a very similar thing. Of course, there is rolling shutter, but we can achieve a faster rolling shutter.

DE: If it's fast enough, then...

KM: It's kind of anti-distortional rolling shutter - we're going to call it Anti-Distortion Rolling Shutter. So therefore... I think it is good enough to make sure to avoid the distortion.

DE: Yeah, it doesn't matter whether it's global or it's just very quick...

KM: That's right, yeah.

DE: No-one's going to notice anyway.

MW: And the structure of global shutter is totally different.

DE: Yeah, it's very difficult. In fact, I know, global shutter was easier to do in CCDs I think...

KM: That's right, yeah.

DE: But CMOS, CMOS it's very hard, yeah. Oh, that was one question. So on the A7R II, there's an all-electronic shutter mode, a silent shutter mode.

KM: Yeah.

DE: Our sense, and I'm not sure if we had seen it in... It might not have been a Sony product, it might have been somebody else's, but I know that there was at least one camera we had in the lab not long ago that reduced the bit depth in the raw files when you used its all-electronic shutter mode. And so I'm wondering, I guess there's a tradeoff there. What is the tradeoff in terms of image quality or noise level or whatever, when you're using an entirely electronic shutter as opposed to a mechanical shutter?

KM: Just distortion.

MW: Yeah. Otherwise, why put the mechanism in?

KM: Yeah.

MW: There are things that the shutter blades can do that the image sensor turning off quickly enough cannot. So there are pluses and minuses, I think what we're going to have to do is put some information together about the pluses and minuses.

DE: Mmm, that would be really good to see, yeah. Because I don't think that information really exists out there. Tell us, and we'll tell the world.

DE: Well that's actually all the questions I have here. I appreciate it very, very much.

KM: Woo! <laughter>

DE: You got through another one.

MW: Oh, and we have a demonstration for you as well.

Demonstration: 4K video from Super 35 crop mode

Sony seemed particularly proud of 4K video recorded by the A7R II in its Super35 crop mode, so they showed me some sample video shot in that mode. As mentioned above, this mode takes 15 sensor megapixels and samples them down to produce the 8-megapixel 4K video output. The result is a phenomenally sharp, clear 4K picture, shown on a huge (65-inch?), drool-worthy Sony 4K TV. I tried taking some photos of the screen (combining full-screen and semi-macro shots of portions of it), but they hardly did the experience justice: You'll just have to take my word for it that the video was simply jaw-dropping in its crisp detail.

Hands-on with SLR lenses

One of the key features of the A7R II is that Sony claims it can autofocus conventional SLR lenses mounted via adapters very quickly. I had the opportunity to try an A7R II with a Canon EF 24-70mm attached via a Metabones "smart" adapter. Even though I was in a pretty dimly-lit conference room at the time, the A7R II seemed extremely responsive, with very little delay between me half-pressing the shutter button and the beep of focus-confirmation. I wondered aloud whether focusing with SLR lenses employed a contrast-detect cycle, to which Sony's Mark Weir replied that the A7R Mark II was using only phase-detect AF, vs the two-stage hybrid approach Maki-san mentioned earlier.

I did find that the camera could sometimes get a little confused when the subject was far out of focus, sometimes initially moving in the wrong direction, but when the subject was out of focus by an amount more typical of real-world situations, it was remarkably fast. Frankly, I've very often seen pure phase-detect SLRs do the same thing when subjects were far out of focus, so am not sure to what extent the behavior I saw in the A7R II was atypical. (You can be sure we'll spend some time looking at this when we get a sample in-house; we have one or two SLR lenses we can try out with it ;-)

In any case, at least with two different Canon EF lenses I played with on the A7R II (the second was a Canon 24-105mm L lens, under brighter conditions), AF seemed entirely fast enough for most uses. With the use of "smart" adapters to translate the focus-motor signals from camera to lens, the A7R II could well be the first truly "universal" camera body we've seen. (When I mentioned the cost of such adapters, Mark Weir pointed out that, while the groundbreaking Metabones Smart Adapters still go for $400, there now are competing models on the market these days, for as little as $100 apiece.)


Once again, Sony is using their sensor prowess to push the envelope in the camera market, with a uniquely powerful full-frame mirrorless camera, and two new RX models with amazing speed based on new sensor tech. All three new models can shoot 4K video and record it internally to a memory card, but that's just one feature among many, and not even one of the most important (in our opinion, at least) in any of the cameras.

As has been the case with previous generations of Sony sensor tech, we can expect these new chips to show up in other makers' cameras a year or so from now, but for the moment, Sony bodies are the only game in town if you want their capabilities.

Of course, the proof is always in the shooting, so we can't wait to get these new models into the lab and put them through their paces. Stay tuned for that, you know we'll hustle sample photos out to you as soon as humanly possible! Meanwhile it sure is a great time to be a photographer!