Panasonic Q&A @ CP+ 2017: The GH5 dev story, why no on-chip PDAF and how’d they make 4K affordable?
posted Tuesday, March 14, 2017 at 6:00 AM EDT
Continuing our interview series from the recent 2017 CP+ tradeshow in Japan, Imaging Resource founder and publisher Dave Etchells was fortunate to be able to meet with no less than eight executives at Panasonic for a wide-ranging discussion on the company's camera lineup and strategy.
For this particular interview, we've had to forgo our usual practice of identifying each participant individually, as there were simply too many people present to reliably distinguish the individual speakers from our audio recordings. Instead, we have condensed the answers and labeled them as coming from Panasonic as a whole. Representing the company for our interview were Yoshiyuki Inoue (Chief of Technical PR), Michiharu Uematsu (Advisor of Technical PR), Shigeo Sakaue (Chief of Technical PR), Naoki Tanizawa (Manager of Communication Department), Akihiro Okamoto (Subsection Chief of Image Engineering), Tsumoru Fukushima (Manager of Image Engineering), Emi Fujiwara (PR/Communication Department) and Takayuki Tochio (Image Engineering). The same individuals can be seen from left to right with Dave in the photo below.
Topics for discussion included the recently-launched Panasonic GH5, its position within the market, and the challenges involved in its creation. Panasonic also revealed the reasons why it makes V-Log L capture an optional extra on its higher-end cameras, why it has so far forgone on-sensor phase-detection autofocus, and how it has been able to best its rivals in its swift adoption of 4K ultra-high def video across its camera line. And for good measure, with several representatives present from the Image Engineering department at Panasonic, Dave also inquired as to what a day in the life of an imaging engineer looks like, and what's on the horizon for image processing as ever more powerful and efficient processors become available.
Without any further ado, let's get right down to the interview!
Dave Etchells/Imaging Resource: [We're curious about the] market position of the GH5; it lists for $500 more than the GH4. Do you see a shift in the target customers for that, or is it more just a matter [of that being] what it costs to put everything in there that you wanted?
Panasonic: Well I think that the target user is basically the same as the GH4, like the videographers who are [working] individually, or indie filming.
DE: Indie film makers and things.
Panasonic: Yeah. We added many brand-new functions in the camera.
DE: So it's the same market, but much more functionality and so you felt that the cost was justified?
Panasonic: Yeah. Of course, we're trying to penetrate into the more professional videographer market, but that's why.
DE: And how has the reception been to it? If you had some expectations [prior to launch], are the shipments or demand above that or below it? How has it been received?
Panasonic: So far, we have so [much] positive feedback from the market. We have been [interviewing] videographers and the media the last few years, and we [were able to implement] some great key functions for videography, like 4:2:2 10-bit internal recording, [faster] focusing speed, and limitless recording. These all-new functions are pretty [well] appreciated.
DE: Yeah. So the reception is naturally very good, because you were listening and interviewing people for years.
Panasonic: Yeah, we [received] very big preorders globally and we are afraid about [supply] capacity.
DE: Hmm, you're afraid about "Can we make them?"
DE: It's never just right, you know? It's like either you're saying, "Oh no, we don't have [enough] orders" or "Oh no! We have too many orders!"
Panasonic: It's very hard to read.
DE: Yeah. So 4K video and 4K Photo [are now available] across your entire line, including [fixed-lens] digicams. Even the very affordable FZ80 has it now. It's unusual to see 4K video in cameras less than $500, and we're curious what sort of challenges are associated with including 4K video and 4K Photo in such low-cost cameras? How have you managed to do it when nobody else really has yet at that price point? Could it be that the processor you developed for your higher-end models has been in production long enough that its cost has come down?
Panasonic: [Your question is] engineer-focused. <laughs>
DE: These guys are [focused on] image quality, not video functions.
Panasonic: Yeah. But we have a long history with developing the 4K technologies, like for TV and for [recording] videos. We have so [much] know-how [in these] technologies, and then we [made a] technical breakthrough [which allowed us] to put everything in a small package.
DE: So it's really just that you have a lot of experience doing 4K and all of the engineering has been spread across a lot of other products, so you have recovered a lot of costs and can make it cheap.
Panasonic: Basically, in our camcorders we have more than 30 years of [video experience which we have] combined with the Venus Engine's still image processing, and we already introduced [4K] motion pictures. However, [for] other manufacturers, especially still-oriented manufacturers, I think it's not so easy [to add 4K capture]. It will be a big challenge for them.
DE: Yeah, so not so hard for you. But it's surprised me a little bit, too, because Sony has also had a lot of experience in camcorders, but you have had much more 4K across your product line sooner than they did.
DE: We're curious about the decision to make the V-Log L function [for some of your cameras] a paid, optional feature as opposed to including it. You'd developed it already for the GH4 and FZ2000, and so now why is it not just included in the GH5 and FZ2500 [by default]?
Panasonic: For [typical] customers, V-Log L [is] too difficult, actually. V-Log L [is especially intended] for cinema people, [as] they like a very flat tone curve. For normal people and broadcasters, I think this function is not so interesting, and if we put V-Log L into [the cameras] without any charge, in that case it's a little bit difficult to explain how to use [it].
DE: So it's too difficult to explain how to use. Of course, you could hide it in the menus, but [I guess] it's a capability that required some development, and you can't really recover that development cost from average users because they don't care [about the feature], it's just confusing. So to be able to invest in that feature, you need to make money from it somehow, and this is a way that you can recover that investment without making it cost more for everybody.
Panasonic: And in the future in the TV category, we and Dolby are discussing about HDR, and maybe [in the] near future it should be [possible to include this] for normal customers, I think. And also already we have 10-bit 4:2:2 [capture], two bits more dynamic range actually. And this means for even normal customers who want to get more dynamic range, they can use 10-bit.
DE: Right, yeah. So you're saying that the way that sort of technology might come to regular customers is through HDR recording, because there's a standard for that. So they'll have their TVs which will understand the HDR signal, etc.
Panasonic: Yeah. In the future, HDR will just be a standard between [your TV and] TV cameras and also motion pictures. However with V-Log L [right now], it's very difficult to manage for normal customers. And if they make a V-Log L [video], actually, [when they view it] with their TV, [there will be] almost no contrast at all.
DE: So there's some concern besides just the money. It's the fact that you really don't want normal people to be able to accidentally turn it on, and then it's like "Agh, this camera!"
Panasonic: Normal American people like [very] contrasty [images and videos].
DE: Yes, American people like too much contrast, too much color…
Panasonic: <laughs> Yeah. [Compared to] that, [V-Log L is the] opposite. So we [would have to provide a lot of explanation to teach regular customers how to use it]. [Retailers] would hate such a situation.
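For readers curious what a "flat" log curve actually does, here's a rough numerical sketch. The formula below is a generic log encoding, not Panasonic's actual V-Log L curve; it just shows how a log encoding lifts shadows and shrinks the spread between mid-tones and brighter tones compared to an ordinary display gamma, which is why ungraded log footage looks low-contrast on a normal TV.

```python
import numpy as np

# Illustrative sketch only -- NOT Panasonic's actual V-Log L formula.
# A generic log encoding lifts shadows and compresses the tonal spread,
# which is why ungraded log footage looks "flat" on a standard display.

def gamma_encode(x, gamma=2.2):
    """Standard display gamma: mid-tones keep plenty of contrast."""
    return x ** (1.0 / gamma)

def log_encode(x, a=50.0):
    """Generic log curve mapping linear [0, 1] to encoded [0, 1]."""
    return np.log1p(a * x) / np.log1p(a)

linear = np.array([0.05, 0.18, 0.50, 0.90])  # shadow, mid-grey, bright, highlight
g = gamma_encode(linear)
l = log_encode(linear)

# Spread between mid-grey and the bright patch, per encoding:
gamma_contrast = g[2] - g[1]
log_contrast = l[2] - l[1]
print(gamma_contrast, log_contrast)  # the log curve yields a smaller spread
print(g[0], l[0])                    # ...while lifting the shadow value
```

On a display expecting gamma-encoded input, those compressed code values render with visibly reduced contrast until a grade restores the intended tone curve.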
DE: So another question about the GH5. It offers 4K video at 60 frames per second, and yet still doesn't have a recording time limit. We're curious about what you had to do to get all that heat off of the sensor? And at the same time, it's a moving sensor so my mind is kind of boggled. Do you have like a big heatsink that moves around, or little fans that blow…? <laughs>
Panasonic: <laughs> Yeah.
DE: So how? How?
Panasonic: We made some improvements to the heatsink and even so, we also increased the body size a little bit. And this heat-dispersing technology did not come just from another camera, but also [from our] TVs, right?
DE: From the televisions? <surprised>
Panasonic: Yeah, televisions [are] getting thinner and thinner, and we have been developing [them] for a long time. We have a lot of background technology.
DE: Ah, so you've been working on heat-sinking, heat extraction, as you make thinner and thinner TVs.
DE: Ah, very interesting. I'm curious, how much of the heat in total comes from the sensor versus the processor? Is it 50/50 sensor versus processor, or is it [mostly] the processor? I'm wondering because I'm focusing so much on the idea that the sensor needs to be cooled, but maybe that's not where the real issue is. The sensor stands out in my mind because it's moving, and so therefore I would think you can't have a big heatsink stuck on it. Is more power actually consumed by the processor?
Panasonic: 30 percent to [the sensor], 70 percent to [the processor]. <laughs> The sensor is not so much different from [that used in the] GH4. But the processor itself [has] almost double the processing speed. And also, of course, we used the latest semiconductor technology; however, it [consumes] much more [power], maybe more than 50 percent more.
DE: 50 percent more.
Panasonic: For our professional VariCam camcorders, [we] use two Venus Engines. However, [here] we use only one Venus Engine, but [with] two more cores, [for] five cores [in total]. Normally we use four cores in one chip, but now it has five cores.
DE: And overall, it's a faster clock speed, too?
DE: So it sounds like there has been a big increase [in thermal output]. The sensor heat is not so much more, but the processor has to do so much more work that it's more [overall]. And [the processor] you have coupled right to the body, and so that's why the body is so much larger on this one?
Panasonic: Yes. Of course, not only for heat sink, but also [to provide room for the full-sized] HDMI, and also we put one more SD slot. [The GH5] now has two SD slots, [which] also makes it a little bigger. According to the mechanical engineer, one [very important point is that we] omitted the in-camera flash, and [so the space previously occupied by] the condenser and flash circuitry, they could use [that] space for the heatsink. [That's] very important.
DE: Ah, I see, so that's part of why the GH5 doesn't have a built-in flash, was that the designers needed the space for the heat sink and bigger connectors.
Panasonic: Yes, yes.
DE: Very interesting, yeah. And things like full-size HDMI and two card slots, those are requests from users that they wanted…
Panasonic: Yes. In the GH4, the biggest complaint [from] filmmakers [was] that it's [too] easy to break the HDMI [connector], so [we had] to change it.
DE: Yeah, that's actually one of my big dislikes [with my new MacBook, the little tiny connector] and so I plug in the power, and at night I work on my lap and the connector wiggles. My last MacBook Air also had a little [Mini DisplayPort] connector, and I went through two motherboards because of [excessive play in the tiny connector causing damage], so yeah, I understand. I want bigger connectors [on my products] too. Very interesting.
Panasonic: For the engineer [the smaller connector] is very convenient, but for user, it's really not so convenient [as] they're easy to break.
DE: Yeah, and users don't care [about the smaller size]. I would be happy with a slightly bigger laptop if I had better connectors, and it's the same thing with the GH5, they want a big connector and two [SD card] slots.
DE: Both the Panasonic GH5 and the Olympus E-M1 Mark II have very high burst capture rates. With 4K Photo, you have 60 frames per second, and for 6K Photo you have 30 fps, but the Olympus E-M1 Mark II can capture full resolution images at 60 fps, even for raw files. How does Panasonic view the competition and tradeoffs between very high-speed still and video? Because you're not going to have as much dynamic range when you extract images [from video as in the 4K Photo and 6K Photo modes], and there's more compression so maybe not as much detail, but also files are much smaller too. So how do you view that competition, and what are your thoughts on it?
Panasonic: So resolution will not be lost in 6K Photo mode, even [though] it is captured from video. And [when shooting video at high sensitivity], we can use 3D noise reduction, [that is,] noise reduction [across multiple frames]. However, [a traditional still image capture as in the E-M1 II] is just a JPEG [or raw file], [so noise reduction is performed] within one frame only. [6K Photo thus allows] a little better noise reduction, [but] dynamic range is a little bit lower, because normally we use 12-bit [processing] for JPEG. However, for motion picture, [we use] 10-bit to [allow] more speed.
DE: Yeah. That's a very interesting point about the 3D noise reduction, that you can look at multiple frames.
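To make the "3D noise reduction" point concrete: averaging several aligned frames of a static scene reduces random noise in a way a single still cannot. The toy sketch below is our own illustration of the principle, not Panasonic's algorithm; it simulates a flat grey patch captured once versus eight times.

```python
import numpy as np

# Toy illustration of the multi-frame ("3D") noise reduction idea:
# averaging several aligned video frames of a static scene reduces random
# noise, whereas a single still can only be denoised spatially.

rng = np.random.default_rng(0)
scene = np.full((64, 64), 0.5)   # flat grey test patch
noise_sigma = 0.05

def noisy_frame():
    """One simulated capture: the scene plus random sensor noise."""
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

single = noisy_frame()                                       # one still frame
stack = np.mean([noisy_frame() for _ in range(8)], axis=0)   # 8 averaged frames

# Averaging N frames cuts the noise standard deviation by roughly sqrt(N).
print(single.std(), stack.std())
```

The catch, of course, is that real subjects move, which is why practical temporal noise reduction must first detect and compensate motion before blending frames.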
Panasonic: Yeah. Also, in JPEG mode [we use] 4:2:2 [chroma subsampling]. For motion pictures, also we have 10-bit 4:2:2, however for 4K Photo we use 4:2:0. This means the color [resolution] is a little bit [lower].
DE: So when you are capturing 4K or 6K Photo, it is captured as 4:2:0, [but with] 3D noise reduction. [For standard JPEGs, it's 4:2:2.] I'm curious: if the engine can record 4:2:2, why do you only use 4:2:0 for 4K Photo? Wouldn't you just record video like you normally do, or are there other kinds of compression that happen with 4:2:2 video?
Panasonic: I'm sorry, I also have the same question. <laughs>
DE: Okay! But it's interesting, one of the main points that I can report to our readers is that the video lets you do 3D noise reduction, which I think is significant. Of course, there's nothing to prevent Olympus from doing noise reduction across frames, but they don't have the engine constructed for that, and it would be probably very time consuming [to change it].
Panasonic: Mmm-hmm, yeah.
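For reference, here's what those subsampling labels mean in terms of stored color information. This small sketch (our own, not Panasonic code) counts the samples per chroma channel for a UHD frame under the common Y'CbCr schemes.

```python
# A quick sketch of what the chroma-subsampling labels mean for a frame of
# W x H pixels: luma (Y) is stored at full resolution in every scheme, while
# the two chroma channels (Cb, Cr) are stored at reduced resolution.

def chroma_samples(width, height, scheme):
    """Samples per chroma channel for common Y'CbCr subsampling schemes."""
    factors = {
        "4:4:4": (1, 1),  # full horizontal, full vertical
        "4:2:2": (2, 1),  # half horizontal, full vertical
        "4:2:0": (2, 2),  # half horizontal, half vertical
    }
    h, v = factors[scheme]
    return (width // h) * (height // v)

w, hgt = 3840, 2160  # a UHD 4K frame
print(chroma_samples(w, hgt, "4:2:2"))  # half the luma sample count
print(chroma_samples(w, hgt, "4:2:0"))  # a quarter of the luma sample count
```

So moving from 4:2:2 to 4:2:0 halves the stored color detail again in the vertical direction, which is the "color resolution is a little bit lower" tradeoff Panasonic describes for 4K Photo.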
DE: Interesting. Since we have so many image quality engineers here, I want to understand what their jobs are like, and what they're focusing on. As they're working to improve image quality, what areas are they looking at, and as processors get more powerful, what sorts of new things can we do? Are the processors and the algorithms more able to recognize objects within the scene and decide "This is obviously all part of one object, there must be an edge here, and this looks like it's flat." What are they working on, what is next and what are the current challenges?
Panasonic: So the camera recognizes where it is flat and which part of the image has more detail, and then decides how much noise reduction for which area. I think the balance is very important in picture quality: Resolution, dynamic range, noise reduction, color…
DE: Yeah, I guess when I was thinking image quality, I was thinking [primarily] of noise reduction. You have a lot [of different factors] to worry about balancing though.
Panasonic: Yes, [and preserving] higher resolution makes stronger noise, for example, so we control the balance of detail. [In] flat [areas we can leave] less noise, but detail remains…
DE: Yeah. I know that's the challenge: To get rid of the noise but still have plenty of subject detail. I think that's something that Panasonic has done well recently! So there are four of you that do image quality. What aspects do you work on? Does one of you focus more on color, and someone else is working on noise or dynamic range, or does everybody do everything? How does the work get divided up?
Panasonic: We all work [on all aspects of image quality]. We are working on total control of each…
DE: And so, it isn't like one of you is a color specialist and someone else is signal to noise.
Panasonic: Image quality is [a balance] between color reproduction and noise levels, and tone curve also affects the color. So those kinds of technology must be managed in total, not in a separate way, so they manage the total balance. [One] team is managing the total image quality and [another] team is developing the processor to implement new image processing algorithms, and [another] team is working for AF speed or accuracy, or implementation of the Depth from Defocus technology. And others are working for precise auto exposure algorithms, or implementation of program mode [and so on].
DE: I'd hate to be the processor guys because they have to keep everyone happy! It's like the AF guys say "I want more [processor power for my] AF!", you know?
Panasonic: Yeah, yeah.
DE: But I'm sorry, I cut you off there, you were about to say…
Panasonic: [Sakaue-san] is the Venus engine engineer, [he developed the first Venus Engine more than ten years ago].
DE: Oh, you're the Venus Engine engineer!
DE: For the image quality guys, I'm trying to get a sense of what you do each day. You come into work and you do mathematical analysis of some images? Is one of you doing that, analyzing [images] on a PC, and then making an algorithm and trying it out? Or is someone else looking at color management equations and trying to figure out gamut compression and that sort of thing?
Panasonic: <laughs> Everything you said, every day we do. So we have a studio in our office, we test images and also we go out to shoot scenery or other things, and then think of algorithms, apply it and then test.
DE: And so you take a lot [of] pictures or you'll make a test scene to stress some particular aspect, and then you'll look at the pictures and try to think "how could I process this differently?"
Panasonic: Yes, yeah.
DE: Does the algorithm recognize [image content] beyond just there's a lot of detail or it's flat? Do you recognize things like, "Oh, this is foliage, this is leaves," and treat that differently than if it was just, say, black and white detail?
Panasonic: We are using scene analysis [to detect] that maybe the sky area is predominant…
DE: You would recognize the sky?
Panasonic: Yeah. The color, it would be a little bit different from [rivals]. Or you detect a person in the image, in that case, you change the skin tone.
DE: Face detection and then you would do a special color management to map the skin tone more accurately when you see a face or something. Huh. Interesting. So certainly on the, from a color standpoint, you are doing scene analysis and figuring out what…
Panasonic: To some extent.
DE: To some extent.
Panasonic: Yeah. Your example of detecting a leaf or foliage is more difficult. If possible, we would like to apply those kinds of new technology, but we are [still] on the way.
DE: Yeah. That is one thing that I have been thinking about image processing, is that as the processors become more powerful and the algorithms are more sophisticated, that there will be more subject recognition or object recognition and then it'll be able to say, "Oh, those are leaves, I know what that looks like."
DE: That's maybe many, many years away. But that's interesting that you do scene analysis specifically for color.
Panasonic: Yeah, I think one unique [form of] recognition is food recognition. For example, when we go to take [a photo of] food, the camera could recognize the shiny dish…
DE: Yeah, a reflective dish.
Panasonic: …and [then it changes the image processing for] the shiny dish, with moist food, with the tea, [with lots of] reflections and texture.
DE: And the camera can say, "Ah, that's udon, and that's ramen so I'm going to make them different colors." <laughs> I'm joking.
Panasonic: By using food detection mode, you can get lots of likes on Facebook. <laughs>
DE: Moving on to a question about autofocus: The specs for the GH5 say that it's faster than the GH4 for Depth from Defocus. So far, Panasonic has focused most of your attention on DFD, if you'll pardon the pun. You haven't yet done anything with phase-detect pixels on the sensor. Is there a technical limitation for on-chip phase detect that makes it less desirable or is just a matter of intellectual property and licensing things? Or do you feel that with DFD that you really don't need to have phase detect?
Panasonic: Phase detect on the image sensor will cause image defects [which] are very easily detected when the [subject] is moving slowly. For example, if the object is moving very slowly and [phase detect is used], so this interaction makes…
DE: As an edge crosses a phase detect pixel…
Panasonic: …it is not smooth, yeah. So to avoid those kinds of artifacts, we do not use the phase detection.
DE: Very interesting, because even if they're recording the light for image formation, the phase detect pixels are only seeing half the light because they're shaped differently, and so you have to do some processing to make up the difference.
Panasonic: Yeah, that's another reason, yes.
DE: And they're doing interpolation, like a nearest neighbor or median filter or something like that…
Panasonic: Right, right.
DE: …and so if you have a sharp edge coming across [the phase-detect pixel] slowly, it will flicker a little.
DE: Ah, that's very, very interesting.
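The flicker mechanism Dave and Panasonic are describing can be sketched in a few lines: a partially-masked phase-detect pixel can't be used directly for the image, so it must be filled in from its neighbours, and as an edge slowly crosses the site, the interpolated value jumps in a discrete step. The example below is our toy illustration (real cameras use far more sophisticated correction filters); it fills the pixel with a simple median of the eight surrounding pixels.

```python
import numpy as np

# Sketch of the PDAF-pixel correction step: a masked phase-detect pixel is
# replaced with an interpolated value -- here a median of its 8 neighbours.
# (Our illustration; actual cameras use more sophisticated filters.)

def fill_pdaf_pixel(img, r, c):
    """Replace pixel (r, c) with the median of its 8 neighbours."""
    patch = img[r - 1:r + 2, c - 1:c + 2].astype(float)
    neighbours = np.delete(patch.ravel(), 4)  # drop the centre pixel itself
    out = img.astype(float)
    out[r, c] = np.median(neighbours)
    return out

# A vertical edge sitting just right of a PDAF pixel at (2, 2):
frame = np.zeros((5, 5))
frame[:, 3:] = 1.0  # bright region starts at column 3
before = fill_pdaf_pixel(frame, 2, 2)[2, 2]

# Move the edge one column left, across the PDAF site:
frame[:, 2:] = 1.0
after = fill_pdaf_pixel(frame, 2, 2)[2, 2]

print(before, after)  # the interpolated value snaps from dark to bright
```

Because the interpolated value snaps between the neighbours' "votes" rather than transitioning smoothly, a slowly-moving edge can appear to shimmer at the PDAF site, which matches the artifact Panasonic describes.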
Panasonic: So that's why we focused on the improvement of the contrast detection autofocus control. We have an R&D section [which researched] the application of the DFD technology to autofocusing, and [the] GH4 was the first generation of the DFD technology. [The] GH5 [has] the second generation of the DFD technology, implementing faster DFD frame processing speeds and [increasing] the number of [distance samples in the depth map].
DE: Ah, you have a… basically sort of more regions with separate depth information.
Panasonic: Right, right. So the precision of focus has improved, in the second generation.
DE: Interesting, so it's not just speed which has been improved, it's also precision.
Panasonic: Right. Total speed is [improved], so we can get more feedback from the previous frame. Moving object detection, especially, is very, very improved.
DE: Like the tracking of a moving object is improved because you're retaining more information from the previous frame, and with better X/Y spatial resolution, you can also distinguish an object that's moving…
Panasonic: Yes, and we are also using the motion data to track our object.
DE: Yes, yeah. So it's coming back, that feeds back into the autofocus algorithm and it can see the motion vector of objects at different depths…
Panasonic: Yeah, right.
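As a rough illustration of the Depth from Defocus principle (our own sketch, not Panasonic's actual implementation): the camera compares how sharply the same region renders in frames taken at slightly different focus positions, and the sharper frame indicates which way to drive the lens.

```python
import numpy as np

# Toy 1-D sketch of the Depth-from-Defocus idea: compare the sharpness of
# the same scene detail under two different amounts of defocus blur.
# (Our illustration only -- not Panasonic's DFD algorithm.)

def box_blur(signal, radius):
    """1-D box blur standing in for lens defocus of a given strength."""
    if radius == 0:
        return signal.astype(float)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(signal, kernel, mode="same")

def sharpness(signal):
    """Mean squared gradient: higher means sharper detail."""
    return float(np.mean(np.diff(signal) ** 2))

edge = np.zeros(64)
edge[32:] = 1.0  # a high-contrast edge in the scene

# Frame A: subject nearly in focus. Frame B: lens stepped slightly away.
frame_a = box_blur(edge, radius=1)
frame_b = box_blur(edge, radius=3)

# Frame A is sharper, so focus should be driven toward A's lens position.
print(sharpness(frame_a) > sharpness(frame_b))
```

Repeating this comparison over many small regions yields the grid of distance samples ("depth map") discussed above, and tracking how those samples shift frame-to-frame gives the motion vectors used for subject tracking.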
DE: Wow, it sounds like a very complicated algorithm.
Panasonic: <laughs> Yeah. [The algorithm is] complicated, but the structure is so simple. The [image] sensor only puts out [one kind of] data.
DE: There's only [one] set of data that comes out. You don't have "These pixels are my image data and these are my phase detect points," you just have image data pixels.
Panasonic: Yeah. [On-chip] phase detect has two kinds of data, and so the hardware you make may be complicated.
DE: To be able to read out that data separately.
Panasonic: Yes, yes.
DE: So we're actually about out of time, I think. Thank you so much for answering all of my questions!