Olympus Q&A @ CP+ 2017: A deep dive on the E-M1 II, autofocus testing & the future of image processing


posted Sunday, February 26, 2017 at 7:00 AM EST


Every February, the CP+ tradeshow takes place in Yokohama, Japan. And each year, Imaging Resource's founder and publisher Dave Etchells heads to Japan for CP+, where he has the opportunity to sit down with executives from the biggest camera companies. In the second of our 2017 CP+ interview series, Dave chats with Haruo Ogawa, Olympus Corp.'s Director, Senior Executive Managing Officer, Chief Technology Officer and Chief R&D Officer, as well as his colleague Toshiyuki Terada, who is General Manager of the Global Marketing Department in Olympus Corp.'s Imaging Global Marketing Division. Ogawa-san answered the majority of Dave's questions in Japanese, with Terada-san ably providing translation and contributing additional insights throughout.

Ogawa-san chatted with Dave about the Olympus E-M1 Mark II -- a camera for which we have a lot of love, as you'll see in our glowing review -- and shared some very interesting information regarding the inroads that the new mirrorless camera is making in the professional industry, particularly in Japan where Olympus has set up an extensive professional support system. Regarding the E-M1 Mark II's autofocus performance, Olympus also shared fascinating details on its internal autofocus testing procedures and why it believes its newest camera has the best, most sophisticated autofocus system on the market.

Additional topics for discussion included new techniques and technology offering the potential for further performance and image quality improvements, Olympus' compact camera lineup, and its plans for future lens development. Read our interview below for the full details!


Dave Etchells/Imaging Resource: Our impression is that the Olympus E-M1 Mark II is doing very well based on interest from our readers, and it seems like it is backordered a lot in the US market. Is it exceeding expectations, or are we still seeing some of the effects of the Kumamoto earthquakes on sensor supplies?

Haruo Ogawa
Senior Executive Managing Officer
Chief Technology Officer
Chief R&D Officer
Olympus Corp.

Haruo Ogawa/Olympus Corp.: First of all, thank you very much for your very good [review] and attention for our new flagship E-M1 Mark II. We received a bigger [volume of orders] than we expected, which is one reason there is some shortage in the market.

DE: Because the demand has been so high, more so than you expected?

Toshiyuki Terada/Olympus Corp.: Yes.

HO: Regarding the Kumamoto earthquake, we did reschedule the launch [of the E-M1 II]. Originally, we were planning for a Photokina launch [with delivery to follow] during autumn, but we postponed to a December shipment because of the earthquake. During this period, we accumulated [stock from] our manufacturing, [but we have still experienced a shortage in some markets such as the United States]. I think that in the next month [we can fulfill these orders.]

DE: You will catch up. Good, thank you.

HO: And also we [would] like to say thanks to our core Olympus fans, because once we introduced this product [with] the development announcement at Photokina, [they had to remain] patient, watching for a long time. Of course, we utilized this period to promote this product.

DE: You're using it, you're promoting, yeah.

HO: Using this period, the opportunity. And [the response was] very good from the markets [and also] from your readers, really positive. We would like to say thanks [to] all your readers and our fans. That's one reason we [have had a] good start with this camera.

DE: We are curious too about who the E-M1 Mark II owners are. Are they coming from the original E-M1 or other OM-D owners, or are you actually pulling photographers in from other areas, like DSLR owners or perhaps other brands of mirrorless? How does the ownership break down between the different groups?

The Olympus OM-D E-M1 Mark II mirrorless camera.

HO: According to our registration data for this period, most of the users are Olympus users who owned the previous E-M1. There are also step-up users from the E-M5 and E-M10. But overall they are Olympus users, with OM-D owners being the largest group.

On the other hand, for the professional market in Japan for example, we have a dedicated pro support team. We have our own contact [with professional photographers], and once we introduced this product, not only our current professional users but also the DSLR-type of professional photographers [were] attracted to us and our product and started to purchase. This is a little bit different behavior compared to the previous models.

DE: Ah, interesting. Have you had the professional support organization even with the previous model, or is that something new?

HO: Sure. Once we introduced the E-M1, we started doing something more productive in the professional market. We contacted professional associations; we are always open to that.

DE: But with the previous model, they didn't respond as much. Now you're seeing a lot of that activity?

HO: Totally different attitude. And also the feedback from professional photographers, [after] they purchased, [we've had a very positive response to the E-M1 II].

Compared to the original E-M1 (shown), the Olympus E-M1 II is improved in many, many ways. See our full review of the E-M1 II to get a feel for its advantages!

There have mainly been three points: One is the AF performance, the speed of the autofocus, and also the IS performance, and also the lens quality. The combination of those good performances and the compact, lightweight size of the E-M1 Mark II is something that has [really motivated purchases].

As I said, our current purchasers are mainly Olympus users, but from now on, we are really expecting also the current DSLR professional photographers and also the enthusiasts taking notice of the performance potential of our product.

DE: I think having a pro service organization is very important for the professional market. I know you've recently launched a similar program in the US market, offering special service arrangements with a membership, where you can get loaner units and that sort of thing.

HO: Of course we agree on the need to support the professional. But on the other hand, the product itself is really, really robust, together with the dustproof and splashproof [design], and also the dust reduction system. We are the original manufacturer of [dust reduction] technology. It [requires] less maintenance and repair service, for not only the professional but also the enthusiast. It's a very robust camera; the construction helps.

DE: That's true. That's a comment that my editor who went on the Iceland trip made coming back, that [the construction] was very impressive, you know, [he was shooting in] rain and splashing water.

Toshiyuki Terada
General Manager
Global Marketing Department
Imaging Global Marketing Division
Olympus Corp.

TT: We heard about the terrible weather. <laughs>

DE: It was a good trip for showing a weather-proof camera. He had the 12-100mm lens on there -- he loves that lens -- and he said he wanted to shoot with other lenses but he liked that one because it's so versatile, the pictures were so good and also he didn't have to take it off with the wind and the rain and everything. We were very impressed with that lens, the whole system is very nice.

So the next question is about autofocus. With the E-M1 Mark II everything is about speed, both in terms of image capture and autofocus. You can now shoot at 18 frames per second with full-tracking autofocus, or 60 frames per second with single autofocus. The question is where do we go from here? Autofocus speed is a bit like money: You can never have too much and it's always nice to have more, but at some point you have enough. Are we getting to that point with the E-M1 Mark II, or is there really further autofocus speed to come still? And is it perhaps already as fast as it needs to be?

HO: Before answering this question, I would like to talk about how we improved the E-M1 Mark II [compared to] the previous E-M1. First of all, we are considering not only the speed of the autofocus; we also have to consider any kind of situation to fulfill the performance of the autofocus. This is one point. Because of our good relationship [with Imaging Resource], I would like to [talk about] something with you [that we have not shared publicly before].

HO: We selected 18 different scenes. For example, [photographing birds] is one scene [type]. [In total, we selected 18] different scenes for the target for the precise, fast autofocus. First of all, among the 18 scenes, 10 are very important ones [that other camera manufacturers have covered].

TT: [after translating Ogawa-san's comment:] Even I didn’t know that!


DE: Ah! Ten are commonly covered. So like a bird in flight might be one people might think of, normally.

HO: I don't like to say which scenes. But among these 18, 10 are very core [scenes]. The other eight scenes we [chose ourselves], these are more dedicated to show the difference of our performance compared to the others. To differentiate from [our rivals], we selected [these] eight [other] scenes.

For example, [for] other manufacturers, maybe a tripod is necessary [for a particular shot]. In our case, [our 6.5-stop Five-Axis Sync IS lets users shoot] without a tripod. For example, [in the previous example of shooting birds] on a tripod, at the exact moment [a bird] flies away, it is very hard to use the tripod to track that one bird. For that kind of scene, we have to do something different from [our rivals]. Because [to maximize] our benefit of handheld shooting, it's really important to [have] very precise autofocus.

Olympus' Five-Axis Sync IS technology explained.

DE: Interesting! In that situation, the whole scene is moving. Like all of a sudden, everything is moving, and to be able to pick out the subject [is challenging.]

HO: For the individual 18 scenes, we scored the previous E-M1 model. [And not just a few shots], but we took maybe 10,000 or 20,000 shots [for each scene type].

DE: So very large statistics.

TT: Yes, sure.

HO: Then [we captured] the same scene with the E-M1 Mark II. The score is 1.5 times higher than the previous model. This is one of the differences we can achieve, [to have] more precise autofocus with the new E-M1 Mark II.

DE: And so it really means 50 percent more keepers?

TT: Sure.

HO: So then back to your question. Still we are not satisfied with [our] current scores in each of the individual 18 scenes. So then how to reach our [desired] level? This is still [an area where] we have room to improve.

DE: And so it's not about speed, it's about being able to handle all these different situations?

HO: Mm-hmm. From this story, as a result, we need [an] all cross-type sensor and also the 121 cross points. We need wide coverage, and also the continuous autofocus performance. So every aspect is coming out [in testing of] the 18 scenes, how to achieve our level.

DE: And you might see one scene you do very well at, but another scene is difficult, and so you're looking at cross points, number of points, etc.?

HO: Sure. Still we need to [keep working] to achieve our [desired] level.


DE: That's very, very interesting about the different scene types. And you say that everyone does 10, you have another eight, and that's the way you want to differentiate. But on the other hand, if they're secret scenes, then how do the users know that you're better at them? Obviously you don't want to tell the competitors, but on the other hand, it would be good to tell customers [what the scenes are]. Like you just told me, "Oh, the bird taking off, that's something we will do better."

TT: Sure.

HO: Of course we cannot say the individual scenes, how we are considering [our autofocus]. But as a result, at this moment with the new E-M1 Mark II, I think that compared to the other manufacturers, our autofocus performance, the overall performance, is the highest score at this moment.

DE: Autofocus is a hard thing for us as reviewers to do much with. We have not had a formal test, and now we are going to do something very simple but very repeatable: It's going to be a guy on a bicycle zigzagging toward the camera.

But what you've pointed out talking about the different scenes is that it's so, so complex. A camera might be perfect at tracking a guy on a bicycle but it couldn't [focus on] a bird in the sky. It's a difficult thing, in terms of how... I've been thinking some about how we could have a forum or a place where people could comment about autofocus specifically, could talk about the type of subject they were shooting, like "It was a kid's soccer game, and this was the camera and this was the lens, and it was good or it wasn't good."

It feels like that for people to understand how good the cameras are at autofocus, you almost have to crowdsource it somehow, because we can't possibly take 10,000 pictures of 18 different scenes. But that's what's required. So we will do this test, but we understand too that it's a very specific [situation], and that autofocus is extremely complex.

HO: That's a really good thing. We have to somehow adapt to various different scenes. And we [have to consider] not only the speed, but any aspect of the autofocus that has to be somehow adapted to different scenes. Maybe a more intelligent system in the camera is necessary to distinguish this scene [versus] other scenes.

For example, you know we already have the face detection and eye detection. [But that's just one of many possible subjects.] Somehow the camera has to recognize what kind of subject you are taking [a photo of.]


DE: Face recognition is easy because faces always look [at least] somewhat the same.

TT: The subject is [always] different, not only people but also animals, automobile or...

DE: Yeah, animals or a sports game, or something. And how does it figure out what kind of subject it is?

TT: To achieve the autofocus' total performance for every kind of scene or subject, we have to get more intelligence in the camera to recognize the subject itself or something like that.

HO: This is a benefit [exclusive to] mirrorless: Always the image sensor captures the images.

DE: The image sensor is always on and seeing things, yeah.

HO: As you know, in the case of the DSLR, if you see through the viewfinder and also during autofocusing, they don't get the image information [to the image sensor].

DE: Right, yeah.

HO: Of course, they [DSLR] can do that with live view. This is kind of already mirrorless, right? <laughs>

DE: Yeah. And their autofocus systems have really been developed with a separate sensor so they may have on-chip phase detect, but it's a question of making it fast.

We're actually talking about my next question, which is about intelligent autofocus. Because I've felt for a while that it's not about how fast, but about making more intelligent decisions. I was thinking just in terms of being able to recognize and identify the subject so it can follow it, but your discussion about the different types of scenes made me think that scene recognition is important, because that will tell you something about the likely behavior of the subject.

[For example], if it's a sports game, there might be a lot of players passing in front of my subject so it's interrupted, or as you said, the bird taking off, everything is moving. Maybe the camera starts recognizing birds and over the space of several shots thinks, "Oh, this guy is taking pictures of birds, so now I'm going to pay more attention to 'bird mode'."

So you've actually been answering my question about what your view is. It seems like we're in agreement that being able to integrate more image processing with the depth data would be important. And so we've been talking about where things will get more intelligent, and we kind of assume the next generation of processors is always going to be faster. But we're getting maybe to the limit somewhat here. How much more do you think we can go in terms of processing power? Another two times? Four times? Eight?

Olympus E-M1 II motherboard with TruePic VIII image processor.

HO: Your question is very deep, always very deep. <laughs> [I'd like to note that the E-M1 Mark II] is double quad-core. As you know, the total of eight cores that we have -- double quad-core means eight-core CPU -- the number of CPUs [dictates] the performance. On the other hand, if you increase the number of cores, we need more power consumption.

DE: Exactly.

HO: Always we have to consider the power consumption, and also the power of the image sensor. How many CPUs we can have with the limited power resources. The answer for the [E-M1 Mark II] is eight cores.

The other important part is that the hardware itself is eight core. But how do you utilize the cores [efficiently]? How to manage the power consumption and how to create [a good] algorithm to utilize [the multiple cores] very efficiently. A very high-level algorithm is necessary.

[A good analogy might be] a normal [person] driving a Formula 1 car. The engine itself is very high performance, but the driver is just wasting the gasoline.

DE: So you're saying that the processor is Formula 1 and you guys are still driving a stock car, you have to learn how to drive the Formula 1?

HO: Yeah. It's a very sophisticated algorithm to utilize the highest power engine. It's really important.

DE: So it's not just talking about processing power. What you're saying is that the software architecture, there is a lot of gain there maybe?

TT: Right. It's very key.

HO: In my opinion, we have to somehow consider a balance of the image engine and the algorithm and also the power consumption of everything. My opinion is that from now on, four times [the power of the whole system] in five years [should be possible].

DE: In five years time, the whole system may be able to do four times as much?


HO: Yes, in my opinion, during the next five years.

DE: I have wondered if at some point we will start seeing, rather than a general-purpose CPU, some of these vision algorithms being mapped into hardware. So rather than having a general computer processor just stepping through a program, maybe some of this can get pushed into dedicated hardware, for lower power consumption and higher speed. Do you see that coming?

On the other hand, it's like the outside market, the general computer business, drives the general processors and there's not really a big market that's driving special-purpose image processing, so it may be difficult. But that feels to me like an area that at some point we will start seeing more [development].

I know actually that when Samsung was in the business, I interviewed their main chip designer and that was something that he was working on, was that their processor had a lot of special-purpose hardware units to do different low-level functions. But unfortunately -- or fortunately for you guys -- they left the market. But it was interesting to me that that was a direction that they were taking, to try to make [special-purpose hardware].

HO: For example, 4K or 8K video needs more power consumption, right? To manage the power consumption and also handling such kind of huge data, the dedicated ASIC is necessary from the power consumption point of view.

DE: Like H.265, yeah, in hardware?

HO: Yes. On the other hand, the new technology, like AI or deep learning, somehow the programmable type of hardware or general...

DE: Ah, there would be dedicated hardware to implement neural networks and things, and so maybe driving that technology, that hardware, then that can be applied to cameras.

HO: And also the combination with the dedicated ASIC hardware and also more general FPGA type of programmable [circuits would give] more flexibility. Some sort of combination is necessary.

DE: It seems like even with FPGAs it would be expensive to develop, I think.

TT: Yeah.

DE: Much more expensive than traditional...

HO: It's really key to manage the power consumption and also the cost issue.

DE: Yeah. FPGAs are expensive chips. Very interesting. Still though, five years from now, even four times more processing will be a big step from where we are.

HO: I hope to try that.

The Four Thirds-format image sensor which sits at the heart of the Olympus E-M1 Mark II.

DE: So a similar question in the long term is how much more really can sensors improve in terms of noise and sensitivity, because we're already very close to physical limits. I don't know what the quantum efficiency is, like on average how many electrons you get per photon that comes in, but at the highest sensitivities, what drives the noise is the shot noise. It's the random variation of how many photons happen to land, so it seems like we can't really go a lot further in the sensors. Is that true, do you think, or are there any theoretical ways to extend sensitivity, maybe like little tiny photomultiplier tubes?

TT: <laughs>

HO: You should ask the sensor manufacturers.


HO: We [have] a good relationship with sensor manufacturers from a technical point of view. We are exchanging opinions and [having] technical discussion and I think that now [regarding] the noise level, we are coming close to the physical limit. But on the other hand, we are discussing about getting more intelligence in the image sensor itself.

DE: I can see that the other dimension we could go in is time somehow. We want to have very high sensitivity and short exposure. Like Sony was the first with the multi-shot mode where it would add up but then if the subject moves, it doesn't work. But maybe if we have enough processing, we can figure out where the subject is in each frame and add it up. That's interesting though, the idea of adding more intelligence right in the sensor.

HO: You know very well.

DE: So another question relative to image quality and sensors is, could we significantly extend dynamic range with current sensor technology just by having a deeper A-to-D conversion? A 16-bit ADC or even 18-bit, and then you just take the picture and it doesn't matter if it's daylight; you could go down ten stops and pull it out still.

HO: [Regarding] the bit depths, we are always considering the output, how we have to use the output itself.

DE: Oh, you have the fixed dynamic range of output.

HO: Of course, without [the need to consider this], bigger is always better. But actually we need the output device, [and] this is a kind of limit. [Given this], we are always trying to [choose] the bit depths which are [most] suitable.

If we see the display [market], 4K or 8K display [market], the dynamic range is getting wider and wider. Also the color space is wider. We are looking for that kind of direction and [considering] that kind of output, what kind of processing bitrates and bit depth we need. We are now just considering.

Sensor-shift image stabilization in the Olympus E-M1 II.

DE: Another question is, what's next for image stabilization? I mean, we're at a crazy level with what you guys can do now, it's just insane, six and a half stops. You noted that the rotation of the earth is now a limiting factor. And I was thinking, "Oh, you have GPS and a compass, and now you can compensate for the earth too."

HO: Yeah, our engineers [told us the same].


DE: In any event, we're to the point where the subject movement is a bigger factor. It was at last year's CES that I shot with the E-M1 and the M.Zuiko ED 300mm f4.0 IS PRO, and I was taking shots and everything was sharp, but the singer on stage was blurred because he moved a little bit. But I'm wondering what comes next? We can't go a lot further there; is it just that this really cutting-edge technology will propagate down to lower-end models down the line?

HO: Basically, the top-end performance should have some effect on lower level.

DE: It should have some effect, yeah. So it may not be the same all the way down?

HO: But the very important point is that every camera [in our] lineup has a very important concept. If we ask of [a stylish] model the same performance as [another model], maybe it will be [thicker] or it will be bigger or something like this, and it will destroy the concept itself. Then we don't like to take this kind of direction.

DE: That's a good thought because for more stops, you need more movement, and so you would have to have a bigger body to be able to move the sensor as much probably.

Another question is whether there is any chance for cameras built around 1-inch sensors. That has been a very popular category in the market.

TT: You mean compacts?

DE: Compact cameras, yeah, not interchangeable-lens but all-in-one compact, such as the Sony RX series and Canon G7X. I know you can't talk about specifics, but does the 1-inch sensor compact camera make sense to you? Or are you not really focusing on compacts anymore? How does it fit?

HO: As you know, we have a broader range of the Micro Four Thirds system from the professional model to the consumer type together with a variety of lenses. First of all, the [potential] end user of the 1-inch sensor, we can cover [them] with a Four Thirds [sensor].

DE: With a very compact Four Thirds-based camera.

Like most manufacturers, Olympus has now pared back its compact camera offerings in the light of the cameraphone revolution, and now focuses solely on its Tough-branded ruggedized cameras.

HO: As you know, we have the compact lineup. Still we have the Tough series. That kind of concept, we cannot achieve with [an] interchangeable lens system; that's the reason we are still keeping that range of the compact camera with the Tough line.

If you ask us [if we will use] the 1-inch sensor for the Tough model, we cannot say yes or no; [whether] it has a size benefit or not, we have to consider. We have a lot of choices for the Tough series... As you mentioned also in your question, there's [one possible] direction to reduce the resolution itself. We have 16 megapixels with the TG-4, but we may reduce the number of megapixels [to gain] higher sensitivity or something like this. Or, of course, we may have a bigger size of sensor. We have a lot of choices, but the important part is to keep the Tough concept.

DE: A sensor as big as one inch in a Tough-series camera would be very difficult. You can't have a lens sticking out.

TT: Sure, that's right.

HO: Anyway, we are already planning for the successor of the next model of the TG-series... We cannot say more. <laughs>

DE: Oh, you can't [say more], yeah. He's probably burning to say "Oooh, oooh, if he only knew. If I could just tell him!"


So there's a question from one of my editors about the lens roadmap. He wanted to know if there was any chance of a really long telephoto zoom because you have the 40-150mm, which is phenomenal, that's another amazing lens, and then of course you have the 300mm f/4. Panasonic went the telephoto zoom direction more with the 100-400mm lens. Is that anything you would consider for your line to the extent that you can say anything about what's coming?

Olympus' M.Zuiko ED 40-150mm f2.8 PRO lens.

HO: Before talking about a future roadmap, I would like to touch upon the 12-100mm f/4 lens. It's a really good lens that has a very good reputation. I didn't think when I planned for this lens [that it would earn such a] huge reputation.

DE: You didn't expect that you would get the reaction you did?

HO: Yeah. Also the same story about the 25mm f/1.2. There are two reasons we can achieve such a very nice performance and also good reputation in the market. One is the design technology. The other is manufacturing technology in our factory. Two years ago, if we tried to produce [lenses] like this, we couldn't do that.

DE: Your technology, both design and manufacturing, has advanced that much that this is possible [now and wouldn't have been two years ago]?

HO: So from this story, once we have such kind of very wonderful design technology and engineering/manufacturing technology, we can provide very nice performance of any kind of lenses. As you can imagine, the long telephoto area and also the wide, fast prime lenses.

Then what is more important for us is what kind of expectation our users have. That is very important. Of course, our R&D resources are limited; where should we go, which should we prioritize in the next one or two years with our lens lineup? This is [all I can say for the time being]. Of course we have more concrete plans. <laughs>

I am very impressed by our engineering and design technology from those two lenses, the 12-100mm and 25mm f/1.2. I am very proud of our engineers and the technology itself. I like to say to our young engineers, "What kind of lens would you like to have? You can do it." <laughs>

DE: So a reward or something, that they can design a lens? That's interesting, that the level of technology has advanced so much in two years and that's why the 12-100mm is so good. And so that means that whatever kind of lens you make now will have very high quality, and maybe it opens up other possibilities.

HO: Other lens manufacturers have asked me, "How can you achieve that at that price?" They were very surprised when they saw these lenses.

Olympus' M.Zuiko ED 12-100mm f4.0 IS PRO lens.

DE: I think [the 12-100mm f/4] is just a perfect lens. I mean, nothing is perfect, but it has a wide zoom ratio, so you can just leave it on the camera, and the image quality is good enough that you don't feel like "I've got a vacation lens on my camera." It's not like you are accepting inferior or bad quality in exchange for being able to have that range. And it's not really small, but it's not so big that it isn't a good balance. And it's weather-sealed.

Well, I think that's all the questions we have time for. Thank you very much indeed!