Fujifilm Q&A, July 2022: NO problem with punch-in focus, and a lot more!
posted Saturday, September 3, 2022 at 5:39 PM EST
After a nearly 2½ year absence due to COVID lockdowns, I finally got to return to Japan in late July of 2022. It was great to be back and reconnect with so many people I know in the photo business over there. One of the first meetings I had was with Fujifilm, at their Omiya headquarters north of Tokyo.
I had a wide-ranging nearly 90-minute discussion with five Fujifilm managers and engineers, with topics ranging from market conditions and supply-line issues to details of the stacked-CMOS sensor in their new X-H2S camera, their new processor chip, autofocus development, how you make a zero-blackout live-view viewfinder system, plans for the GFX product line and at least a little info about new lenses.
Importantly, they also explained that reports of problems with "punch-in" focus in video shooting are incorrect: the issue never affected any product that actually shipped to end users.
There's a lot here (almost 8,000 words of text), but I've grouped everything under subheads, so you can easily skim and quickly find the parts you're interested in.
IMPORTANT NOTE: Be sure to check out Fujifilm's Fujikina event coming up in New York City on September 10th. Fujifilm may be making some very significant product announcements there, answering a number of questions about upcoming products that I raised below.
This is now the second article I've published based on my recent visit to Japan, and I have at least another four, possibly six more coming, so stay tuned! (My first article was a photo tour of SIGMA Corporation's stunning new headquarters building in Kanagawa, just outside Tokyo. It's worth a look just to appreciate how beautiful a corporate workspace can be.)
I met with five Fujifilm managers and engineers:
Yujiro “Yuji” Igarashi, Divisional Manager, Professional Imaging Group, Imaging Solutions Division
Makoto Oishi, Manager, Product Planning Group, Professional Imaging Group, Imaging Solutions Division
Jun Watanabe, Product Planning Manager, Product Planning Group, Professional Imaging Group, Imaging Solutions Division
Kunio Åo, Manager, Imaging Solutions Division
Shu Amano, Manager, Imaging Solutions Division
All contributed extensively to the discussion, so many thanks to Fujifilm for helping me sort out who said what during our meeting: I wasn't sufficiently familiar with their voices to be able to do so from the audio I recorded.
Without further ado, let's dive into the meeting! Enjoy!
Fujifilm’s professional imaging division is doing very well
RDE: My sense is that Fujifilm’s cameras and lenses are doing very well lately. Can you comment on the overall state of the market and your place in it?
Yuji Igarashi: All the newest technologies have been incorporated into the mirrorless cameras, so the average selling price per camera is still increasing. The number of cameras being sold is declining, but the price is increasing, so the overall value is going up. So that’s something that has been happening this year.
RDE: So the total cash value of sales has been increasing, hm! That’s a good thing.
Yuji Igarashi: It’s happening for the industry as a whole, but for us the same thing has been happening as well. In the last year, we saw double-digit growth over the year before, and we’re seeing the same thing this year as well.
Even allowing for a prior dip, double-digit growth is impressive in the current market.
RDE: So sales in 2020 were down from 2019 due to the coronavirus pandemic; how much did they drop?
Igarashi-san didn’t have the specific numbers at hand during the interview, but I looked up their financial reports online later.
Making direct comparisons of Fujifilm’s imaging revenue and profits across the years is a little difficult, because in 2021 they separated consumer and professional imaging. Consumer imaging covers traditional chemical-based imaging products such as photographic paper and minilab supplies and Instax cameras, while professional imaging includes both their digital cameras and the Fujinon line of broadcast lenses.
That said, 2020 was of course a down year for everyone, thanks to the Coronavirus pandemic, but looking back at their previous financial results, total revenue for 2021 was slightly above that of 2019, and profit was significantly higher. (Note though, that Fujifilm’s fiscal year ends on March 31, so the 2019 fiscal year included the early effects of the pandemic.)
Supply-chain issues were tough, hopefully now getting better
RDE: My second question of course is supply-chain issues. How impacted have you been by global supply-chain problems, and how do you see supply-chain issues evolving over the next 12-24 months?
Yuji Igarashi: Last year was definitely tough, especially because of the components and logistics. Preparing for this year, we thought we were ready, but then it slowed down because of the Shanghai lockdown, which is unfortunate. Now that the Shanghai lockdown is over, at least for now, we’ve been able to restart production and we are catching up very fast. Hopefully that will continue and there won’t be any other disruptions.
RDE: So Shanghai in particular was very critical?
Yuji Igarashi: Yes, for us that was quite a big event, because our main factory is in China.
RDE: Ah, and so Shanghai is the port that your products would travel through?
Yuji Igarashi: Yes.
Shu Amano: And also, some component parts manufacturers are in Shanghai.
RDE: So your supply chain, much of it is local to Shanghai?
Yuji Igarashi: Not all of it. A part of it is, but of course when you’re making a camera, you need everything.
RDE: Yeah, it doesn’t matter how many parts you have on hand, if one part is missing, you’re still stuck.
The initial Shanghai lockdown had lifted about 6 weeks prior to my arrival in Japan for this trip, but other lockdowns were spreading across the country. I think the severity of those has perhaps decreased a little as I write this in late August 2022, but now there’s new and increasing disruption due to drought-induced water shortages and the loss of significant hydroelectric generating capacity right as the country is experiencing an unprecedented heat wave. The net result is severe curtailment of industrial production in many areas to avoid power cuts to residential users. Given all that, it’s hard to say what supply chains are going to look like as the summer wears on.
What about inflation and the US recession? What does Fujifilm project?
RDE: The other thing that’s happening right now is significant inflation worldwide, but the US economy also appears to be headed for a serious recession. I don’t know the specific number offhand, but the official inflation rate in the US is over 8%, and that’s just the official rate. When you look at just food and housing, the actual rate people are experiencing is much higher, with high gas prices only adding to the pain. What is your forecast for the photo business in general and your own business in particular for the next year?
Yuji Igarashi: I think of course spending generally will be tight, considering the current situation, the financial status, but currently we’re still seeing fairly strong demand in the mirrorless industry, probably because of many new product launches. The new technology is providing value for customers. For us, we’re very confident as well, because we have the new sensor and processor combination. So I think we can also provide new value for both our existing users as well as new users. We’re fairly confident, but at the same time we need to be mindful of the situation, to provide the right products in the right quantities to the customers.
AF improvements and how they developed AIAF
RDE: Switching to technology now, AIAF is a big new addition in the X-H2S; it’s a big step up. How long has it been in the works? On a related note, the improved AF and AI capabilities rely upon increased processor power but does the readout speed of the stacked sensor matter too? If so, will AIAF be able to be implemented into upcoming cameras that lack a stacked image sensor?
Jun Watanabe: So we have implemented AIAF for the first time, and this feature takes advantage of the fast processing speed of the new processor. So we can’t implement it for the current [i.e., previous-generation] models. The increased readout speed can increase the calculation speed of the AF algorithms, for example up to 120 times in one second.
RDE: So the autofocus can “look” at the subject up to 120 times in a second.
Jun Watanabe: Right. So we can increase the accuracy of the tracking autofocus performance.
RDE: I’m especially interested in the deep learning aspects of the AF system. Those would take a long time to develop, so I’m curious how long ago you started building the data set and training the algorithm.
Jun Watanabe: The number of photos used in the training set was more than 10,000 photos. The process of AI development is that first we separated the different parts of each image, for example the eyes, the face, the head and so on.
RDE: So a human had to go through every image first, saying “this is the head, this an eye”, etc.
Jun Watanabe: Yes. The second part is to use different sizes of the images and different angles, so it can detect the subject more accurately. Then the last thing was that we did field tests, to see if there were any subjects that were hard to detect, and we’d do more training for those subjects.
RDE: It must be a very long process, I think - one year, two years, three years?
Makoto Oishi: I think just after the development of the last processor, we already started to investigate the current device so I think at that point we already decided to incorporate the AI processor in the new chip.
RDE: So basically, as soon as the previous processor was shipping, then you were thinking about what would come next and had some idea that it would involve AI and deep learning.
Yuji Igarashi: I think the number of images wasn’t just 10,000 images, but tens of thousands…
RDE: Yeah, I can imagine it easily being many tens of thousands. It must have been a huge amount of labor then, for someone to sit there saying “face, eye, eye, next photo… face, eye, eye, etc.” for all of the tens of thousands of images. I’m glad it wasn’t me who had to do it <laughter>
How do you test AF algorithms before the new processor chip is available?
RDE: This wasn’t part of my original questions, so you may not be able to answer it, but it just occurred to me that when you’re field testing, you don’t have the new processor yet. What kind of rig did you use to run the AF algorithms being developed in the field? It wouldn’t be a regular camera, you’d have to have some sort of engineering hardware behind it, like programmable gate arrays, that sort of thing?
Yuji Igarashi: The question is, without the actual processor itself, how would you make sure that it works?
Kunio Åo: Ah, yeah yeah, we can simulate it on a PC actually. So the processor is hardware, but we can simulate the AI processing in software on a PC. So we can see the accuracy and we can project the [eventual] speed.
RDE: Can you read out the phase-detect signals from the sensor to send to an external processor like that? I’d assume you’d need to be capturing the phase-detect information and have that be processed.
Kunio Åo: Kind of. You can see the live view feed, right? So that is enough…
RDE: Ah! - So the AI part isn’t so much about distance as it is about shape and color, the boundaries of objects, etc. So it makes sense then that you can just pipe the live view to a PC for processing.
Kunio Åo: - So for the initial field testing, they use the PC, then as they get to the final stage, they use prototypes of the actual chips to do fine tuning.
Bird AF works for frogs, butterflies, etc too, was that the plan?
RDE: It’s interesting, I read that the bird AF also works very well for things like frogs, dragonflies, butterflies, etc. Did you train on any other kinds of subjects besides birds, or was that just a happy accident?
Yuji Igarashi: Happy accident, coincidence.
How does better IBIS help autofocus?
RDE: I read somewhere that the new improved IBIS helps with subject detection somehow. Does it help just because the image of the subject is stable for longer periods of time?
Jun Watanabe: - In general, the IS performance won’t affect AF performance, unless there’s some shaking happening that the human eye can’t detect.
RDE: Ah, so sometimes a human looking through the viewfinder wouldn’t notice anything, but for the AF system looking at 120 frames/second, the processor is seeing jitter. So in that sense it helps.
[The X-H2S's in-body image stabilization is a healthy step up from that of the X-H1. The X-H1 was rated for 5 stops of IS improvement, but up to 5.5 stops with lenses that didn't have their own IS system built in. (Fujifilm said this was because the non-IS lenses tended to be wider focal lengths, where the body-based IS was more effective vs tele ones.) The IBIS in the X-H2S is rated for a full 7 stops of improvement. To put that in perspective, a focal length that you'd normally need to shoot at 1/200 second to get good results could be handheld all the way down to 1/2 second.]
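The stops arithmetic above is easy to sketch in code. This is just a quick illustration of the math, not anything from Fujifilm; the function name `slowest_handheld_shutter` is my own invention, and real-world results will of course vary with technique and focal length:

```python
# Hypothetical helper illustrating the stops arithmetic: each stop of
# stabilization doubles the slowest shutter speed you can expect to hand-hold.
def slowest_handheld_shutter(base_shutter_s: float, stops: float) -> float:
    """Return the slowest usable shutter speed (in seconds) given an IS rating in stops."""
    return base_shutter_s * (2 ** stops)

# 7 stops of IBIS: 1/200 s becomes 128/200 s, i.e. roughly 1/2 second.
print(slowest_handheld_shutter(1 / 200, 7))  # 0.64
```

So a 7-stop rating is a 128x (2^7) multiplier on the handholdable shutter time, which is why the jump from 5 or 5.5 stops to 7 is a bigger deal than the numbers might suggest.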
Yeesh, relax about PASM, already… ;-)
RDE: I put this next question in because, while I know the answer, I wanted Fuji to be able to tell the market again. Seeing a PASM dial on the X-H2S, some people have been saying “Oh no, Fuji’s abandoning us! Where are the individual exposure controls?” <laughter around the table> I recall from a prior conversation with Udono-san that users needn’t be concerned about this. Could you explain Fujifilm’s philosophy about providing different camera styles for different users?
Yuji Igarashi: Yeah, that’s kind of interesting, because we did not hear that when we introduced it for the GFX.
RDE: Huh, that is interesting, that the issue never came up with the GFX. Once before, Udono-san explained to me about Fujifilm making different styles of cameras or user interfaces for different users, and I wondered if one of you could just say something about that for the record again.
Yuji Igarashi: Of course you know the answer is that we have a control style for each camera. The buttons and dials are maybe for sports or press photographers, where you need quick access to the settings, and rapid response. Whereas for some of the rangefinder style cameras, you have time to carefully check and make your settings; you can do that while you’re looking through the viewfinder and make small adjustments and things like that. So it depends on the style. So we’re not moving everything away from the dials, no need to worry about that.
RDE: Huh, that’s interesting; somehow in my mind, I tend to think of PASM as being more for amateurs, but actually it’s the professionals who need it, to be able to change modes quickly. Thinking about it, I can see that that’s the case in my own shooting. If things are happening quickly, I’ll tend to just set the camera in aperture or shutter priority, so I can control just the parameter I most care about and let the camera handle the other ones on its own. If I’m shooting something that’s not moving though, I’ll often switch to full manual mode and fiddle with the individual settings.
Fujifilm's first stacked-CMOS sensor gives 4x faster readout, here's how:
RDE: The sensor is another big change. I believe this is the first stacked-CMOS sensor in a Fujifilm camera. What can you say about how the stacked technology is used in the X-H2S? Does the stacked circuitry play any role in AF pre-processing (the cross-correlation function, for example), or is it just about buffer memory and getting data off the sensor more quickly? Does some of the A/D happen in the stacked circuitry?
Jun Watanabe: This is the first stacked sensor for Fujifilm, and the first for APS-C and Fujifilm X-Trans. By maximizing the sensor, we realized 4x readout speed vs previous models. By combining the new sensor with the new processor, we achieved 40 frames per second burst shooting with 120 frames/second AF calculation and 120 fps blackout-free EVF refresh, and 4K 120p video for slow motion, as well as lower rolling shutter.
RDE: I’ve always been curious about stacked-CMOS technology. Is there any actual logic circuitry or processing on the stacked chip, or is it just A/D and buffer memory?
Makoto Oishi: Our sensor doesn’t have any buffer memory. So it just maximizes the A/D converters (there are 4x more) and also the transfer circuits [readout channels to the processor]. That is why we can read faster than with previous ones.
RDE: That makes sense: With a conventional sensor, you can only read the data out from the edges of the array, but with the stacked technology, you can come down from the middle of the array and have many more readout channels.
Yuji Igarashi: That’s our design, our approach…
RDE: Right, so other people may do things differently. As a customer of the chip fab… You have a contract company make the sensor chips for you, a foundry right?
Yuji Igarashi: Yeah, we design the sensor, then have the foundry make them, just like the older ones.
RDE: So you can choose whatever you want to have on the stacked chip then.
Jun Watanabe: Yes.
RDE: There are many people using stacked technology right now, but everyone can choose what they want to put on that second layer.
Jun Watanabe: Yes.
This was interesting and very good news to me. When Sony announced the first cameras with stacked-CMOS sensors, my assumption had been that the circuitry on the stacked chip was somehow tied to the sensor itself. This had concerned me, because I thought it might mean that other camera makers wouldn’t be able to innovate and compete, because they wouldn’t be able to control their overall system architecture. In fact, it turns out that the sensor customer can have the fab stack any kind of chip they want behind the sensor. In Fujifilm’s case, they just use the second chip for A/D and data-transfer circuitry, but there could conceivably be logic and/or memory elements there as well.
How the heck do you make a zero-blackout viewfinder?
(There was a lot of back-and-forth in the conversation, as I was a bit slow to understand what they were saying. Rather than repeat all of that here, I’ll just summarize what was said.)
I’ve wondered for a while how it’s possible to make a zero-blackout viewfinder. I’d always assumed that there must be two separate signal paths, one for the EVF, the other for image capture, but couldn’t figure out how you could read out the same sensor data twice without messing it up.
The answer is actually pretty simple: The X-H2S does a full readout of the entire sensor array 120 times/second. Once the data is inside the processor chip, there are two entirely separate image-processing chains, with the output of one driving the EVF, and the output of the other used for image capture.
This is all down to the combination of the new sensor and processor. The slower readout of older sensors and their less-powerful processor chips meant that the camera had to switch between the live view display and image capture, so the EVF would black out whenever the camera grabbed an image. In the X-H2S though, the connection between the sensor and processor is 4x faster, and the processor has enough number-crunching power that it can handle both live view and image capture processing at the same time.
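The dual-pipeline idea above can be sketched in a few lines of code. To be clear, this is purely my own illustrative model, not Fujifilm's firmware; every function name here (`read_sensor`, `evf_pipeline`, `capture_pipeline`) is hypothetical. The point is simply that one fast readout loop feeds two independent consumers, so the viewfinder never has to pause for a capture:

```python
# Illustrative sketch (NOT Fujifilm's actual firmware): a single fast sensor
# readout feeds two independent processing chains, so the EVF keeps updating
# even on frames where a still image is being captured.
def read_sensor(frame_number):
    # Stand-in for one of the 120 full-array readouts per second.
    return {"frame": frame_number, "pixels": "..."}

def evf_pipeline(raw):
    # Chain 1: downscale and convert the frame for the viewfinder display.
    return f"EVF shows frame {raw['frame']}"

def capture_pipeline(raw):
    # Chain 2: full-quality processing of the same frame for image capture.
    return f"Captured frame {raw['frame']}"

shutter_fired_on = {3}              # frames where the shutter happens to fire
evf_output, captured = [], []
for n in range(5):                  # 5 of the 120 readouts in a second
    raw = read_sensor(n)
    evf_output.append(evf_pipeline(raw))        # EVF runs on every frame...
    if n in shutter_fired_on:
        captured.append(capture_pipeline(raw))  # ...even while capturing
```

In the older, slower architecture the `for` loop effectively had to choose one pipeline or the other per frame, which is exactly where the blackout came from.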
Why does video recorded in FLog2 have more rolling shutter?
The X-H2S has much less rolling shutter when recording video than many competing cameras or Fujifilm’s earlier models. But when you record video in FLog2, the rolling shutter is suddenly worse. What’s up with that?
It turns out to be a pretty simple explanation: FLog2 needs more bit depth than other modes, including ordinary FLog, and the A/D converters take longer to read out at that level of precision.
The A/D converters in the X-H2S have 14 bits of resolution, meaning they can distinguish 16,384 different levels of brightness, and this is the bit depth used for recording still images.
Video doesn’t usually need that level of fine-grained brightness discrimination, so most video modes read out 12-bit data, resulting in 4,096 levels of brightness that can be distinguished.
To achieve its higher dynamic range though, FLog2 needs all 14 bits, which takes the A/D converters more time to produce than 12-bit data does. This means the array has to be scanned more slowly, resulting in increased rolling shutter.
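The bit-depth numbers above are just powers of two, which a one-liner can sanity-check. (The exact readout-time cost of the extra two bits is Fujifilm's implementation detail and isn't modeled here; this only shows where the 4,096 and 16,384 figures come from.)

```python
# n bits of A/D resolution distinguish 2**n brightness levels.
def tonal_levels(bits: int) -> int:
    return 2 ** bits

print(tonal_levels(12))  # 4096  -- most video modes
print(tonal_levels(14))  # 16384 -- stills and FLog2
```

So FLog2's 14-bit readout carries four times as many tonal levels per pixel as the 12-bit video modes, which is the extra precision the A/D converters need more time to deliver.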
There is NOT a problem with manual focus punch-in!
One publication noted in an online review that the live view image got a little wonky when using punch-in zoom to focus manually. It turns out this is entirely incorrect, at least for any cameras that ended up in users’ hands.
“Punch-in” focusing means you can hit a button to zoom the live view image into just a small part of the scene when you’re focusing manually, to help see whether the image is sharp or not. (Fujifilm calls this feature "Focus Zoom".)
It had been reported that the zoomed punch-in image on the X-H2S had some areas of the live view image looking sharp, while others were blurred slightly. This would obviously be an issue if whatever you were trying to focus on happened to be in one of the soft-looking areas.
It turns out this report was simply the result of the reviewer not checking with Fujifilm to see whether what they were seeing was something fundamental or just the result of a beta-level firmware bug. It was the latter, no production units have this issue.
I have to say I was a little shocked that something like this would be reported without checking with the manufacturer first, and concerned for what it might mean for reviewers being able to get early access to samples of new camera models.
From day one at IR, we’ve always been extremely careful to make sure that any issues we’ve seen in prototype samples are actually representative of production samples or if they should instead be chalked up to early-firmware teething issues. While it’s often frustrating and time-consuming to have to chase down firmware bugs (to the point that it sometimes delays posting our reviews), we always want to make sure we give an accurate picture of what a camera can do and how it will behave in the hands of real-world users. I often griped to the team that it takes 4x longer to test a bad camera than a good one, simply because we always double- and triple-checked any negative findings to make sure they were real issues and not just a case of “beta blues”.
Besides being a disservice to the community as a whole, reporting on bugs in beta firmware may make manufacturers think twice about handing out review samples with non-final firmware. This would be bad for everyone, albeit perhaps not as bad as having incorrect information floating around the internet.
=> For any gear reviewers who might be reading this, I implore you to always check with the manufacturer first before publishing negative comments about issues found in cameras with beta firmware. The consequences of not doing so will be bad for everyone, reviewers, consumers and manufacturers alike.
When is the remote-shooting app going to be updated/fixed?
Fuji’s remote mobile app has been a pain point for a lot of users. It apparently works better with the X-H2S than some older models, but I came across a number of complaints online. Some people said it almost never worked for them, while others have said it works 100% if they follow a particular sequence when connecting to the camera from Android. I asked Fujifilm about this.
Summarizing their reply, they’re aware of the complaints and said that remote camera connectivity is important to them and that they’re actively working on an update, but couldn’t commit to any ETA for when it might be released. While they said that it’s more stable and connects to the X-H2S much more quickly, they noted that they need to improve its performance with older cameras in particular.
Will the high-res version of the X-H2 use an X-Trans or Bayer sensor?
Fujifilm uses both X-Trans and Bayer color filter arrays in their cameras, so I was curious which they would choose for the upcoming higher-resolution version of the X-H2, as well as whether it would be a stacked CMOS sensor or a conventional one.
It turned out that they’d already disclosed that it wouldn’t be a stacked sensor, but wouldn’t commit to any comments beyond that, other than saying “please look forward to our X-Summit in September” :-)
(The rumor mill seems to be leaning towards it using a 40-megapixel X-Trans chip, but no one will know for sure until the X-Summit.)
What’s the story with the accessory cooling fan for the X-H2S?
RDE: The optional cooling fan for the X-H2S has drawn some comments. Some have criticized the need for a fan, but I think it’s a good option as a way to maintain a small body size for most users but not lose recording time at higher resolutions/frame rates for those who need it. What can you say about the process that led to this design and engineering decision?
Yuji Igarashi: Not everybody needs a cooling fan, and it makes the body much bigger. The camera on its own is already able to shoot for a long time, probably enough for most people. That’s why we decided that the fan should be an option, just for those people who really need it to shoot for a very long time.
RDE: Yeah, for people who are shooting in really high resolution, maybe in very high temperatures, etc. Is there a specification for shooting time vs resolution vs temperature? Do you make any statements about how long the camera can shoot for with the cooling fan vs without it?
Summarizing some back-and-forth, they told me that the fan will typically only be required at higher ambient temperatures, but under those conditions, it can more than double the amount of recording time before you hit the heat limit. They gave me the following numbers:
4K 60p at 25°C: 240 minutes with no fan
4K 60p at 40°C: 20 minutes with no fan, 50 minutes with the fan
40°C is really hot, equal to 104°F. Twenty minutes of 4K 60p with no fan seems pretty good, and 50 minutes with one is excellent. Also, while they didn’t have any spec for it, the body will cool off and be ready for the next go-round much faster with the fan operating.
Fujifilm has a *lot* of camera lines, will some go away over time?
RDE: Fujifilm has a very(!) broad range of APS-C bodies these days, with no fewer than 6 different product lines (X-H, X-Pro, X-T#, X-T##, X-S and X-E). Will all of these lines continue into the future, or do you see some of them going away or merging with each other?
Yuji Igarashi: Currently, we believe that each product line has its own unique characteristics, so as long as that makes sense for us, we’ll continue with that line. For us, it’s whether we can provide value for the customers.
RDE: So essentially you see six very separate markets or use cases, and as long as you can address each of those with these cameras, they’ll continue?
Yuji Igarashi: Some of the products take longer to update, because it doesn’t make sense to update them every year. For example with the X-H, it took four years to come up with the next version. You know, we always think about whether a new model makes sense. If we have the technology, when we feel ready, then we’ll introduce a successor.
RDE: Right. For instance the X-E is a lower-end line, so I guess that’s not as demanding of new technology to update it. That makes sense, you don’t introduce a new model just to add a dial or move a button, it needs to be a significant advancement.
Will there be a GFX 50R-type body for the GFX 100?
RDE: Turning to medium format, with the GFX 100S already being a relatively compact camera, is there less demand or need to develop a GFX 50R-styled body with the 101-megapixel image sensor?
Yuji Igarashi: I think when we introduced the 50R, that was kind of the first small medium-format camera, so I think there was a value there. Now that we have a smaller body with the GFX100S, I think there’s maybe less need for something even more compact. Of course, we always look at the market to see if there’s a need to introduce something, but I think at the moment, probably because of the GFX 100S’s body size, there’s not as much demand for a smaller model as before.
What’s the GFX series’ market share?
RDE: The GFX series has really remarkable cost/feature/performance metrics; nothing else in the medium format market comes close IMHO. It seems to me it would be hard for other manufacturers to compete against them. (I’ve seen street prices for the GFX 100S under $5,400 US :-0) Can you share any publicly-available information on your market share in that segment?
Yuji Igarashi: We really don’t have any data to support that. Also, for us, we haven’t just targeted the medium format market, because that’s quite small as we all know. So we’ve been looking at the larger digital camera market in general, including the high-end full-frame segment. So we don’t really look at it that way, but of course, at the same time, the studio B2B type of business is also very important for us, and I think we are targeting those segments as well with our camera. The reputation we have, through word of mouth and things like that, has helped us get into those markets as well.
How about a medium-format body tailored more to the studio/commercial market?
RDE: Perhaps because of the difficulty in competing with Fujifilm head to head, Hasselblad and Phase One seem to be focusing on somewhat narrower markets. Both notably emphasize tethered capture, Phase One especially so, with options for wireless, ethernet and USB-C, and software that runs on both desktop and tablet platforms. Both also offer 16-bit A/D conversion for greater dynamic range. You just alluded to it, but what’s Fujifilm’s view of the studio market? Could there be a product in the future tailored just for that market, or is it too small a segment to create something very specifically targeted to it?
Yuji Igarashi: Probably not a camera just specifically [for that segment], but for us the fashion and commercial market is very important, and the core image quality of the GFX enables us to capture that market for sure. So we still are very interested in that market and I think we actually are in that market already.
What’s going to be new about the updated 56mm f/1.2 X-mount lens?
RDE: Switching to lenses, the most recent X-mount lens roadmap that I saw showed a new 56mm f/1.2 model. It looks like this will replace the current non-apodizing 56/1.2, is that the case? The existing 56/1.2 was already such an excellent lens, what can users look forward to in the new version?
Yuji Igarashi: Yes, with this also we can’t say much, but it’s the things which our end-users have been requesting, that they wished the existing lens could improve even more. And obviously because of the higher resolution demand in the future, image quality will become even better.
RDE: So it will have a smaller circle of confusion, a sharper image? Of course, for you to tell me what the users have been asking for, you’d be saying there are problems with the current lens, so I guess I can’t ask that <chuckles>.
Yuji Igarashi: I mean it’s not a matter of any problem, it’s just their wishes, a wish-list.
RDE: I don’t have written down here what the time frame was that was projected for that; what did you have shown for that on the roadmap?
There was some discussion here; the roadmap says just “2022”, but it’s not clear whether that means the calendar year or the fiscal year. The upshot was that they said people wouldn’t expect “2022” to mean their fiscal year (which ends on March 31, 2023), so we can expect to see it sometime this calendar year.
The new 150-600mm looks amazing, what’s the story behind its development?
RDE: Both of your new XF zooms (18-120 f/4 and 150-600 f/5.6-8) seem like very significant products to me. The 150-600mm’s lens diagram looks like a tour de force of optical engineering, with 24 elements in 17 groups, 3 ED and 4 Super ED elements. Its MTF charts look very impressive, and at only 1,600 grams and $1,999 US, it’s surprisingly lightweight and the price is in line with similar offerings from Sony and Canon. It also fills a real gap in your lens lineup, with 50% more reach than your 100-400mm f/4.5-5.6. There must be quite a story behind its development. What did the optical engineers find most challenging about its development, and what new technologies or new manufacturing methodologies made it possible?
Jun Watanabe: Other telephoto zooms in the market [with a] 600mm focal length weigh around 2kg. So we targeted a 20% weight reduction. That meant the main point was the aperture; we chose f/8 at the tele end to reduce the weight. With the new sensor and processor, we can do accurate AF with good performance at that aperture, and our high ISO performance is also very good. So we chose f/8 as the maximum aperture at the tele end to achieve good size and weight.
RDE: Ah, so your weight target drove the choice of aperture. As I said, it also seems like a very advanced design, with so many elements, so much ED glass and that sort of thing. Are there aspects of that, perhaps the Super ED glass, that wasn’t available previously, but make such a lens possible now? Were there advances in manufacturing technology that helped enable it?
Makoto Oishi: No, we could manage the assembly with the existing lens technology.
Kunio Ao: That’s why we used so much ED glass, for compensation.
RDE: Ah, so all that ED glass helped you produce it with the current manufacturing technology?
Jun Watanabe: Yes.
This was interesting to me; I had expected that they would have needed new manufacturing technology to get adequate yield on a lens with so many elements that needed to be precisely aligned, but it turns out they could stretch their current tech by using so much ED glass. As to size and weight, I shot the image above by myself, and didn't have a tripod with me so couldn't show you how it looks in the hand, but the X-H2S body it's attached to gives you some idea of the relative size. Since it's an internal-focus/zoom design, it also doesn't change length when you're using it; it's always as compact as it appears in the photo. The photo doesn't really do it justice though, and of course it gives no idea of the weight. The 150-600 is impressively light for its focal length range, even allowing for the relatively small maximum aperture. The other thing that struck me when handling it was how short the zoom ring's throw was; it's very quick to change focal lengths. Fuji clearly hit their mark in terms of size and weight; it's far from tiny, but this lens would be much more comfortable to handle over the course of a long day's shooting than its competitors.
PDAF at f/8, and some thoughts about computational photography…
RDE: Your comment about your cameras being able to focus well at f/8 points to an advantage I think on-chip phase detect has over older SLR systems, namely that you can easily vary the baseline for your PDAF focus points, and the image sensor pixels are also very finely spaced. The combination lets you accurately focus even at small apertures. Is the PDAF readout part of the same sensor scanning you use for live view and image capture?
Jun Watanabe: We can read out the image data separately, one for phase detect AF, and another for live view.
RDE: This wasn’t part of my prepared questions, but it just occurred to me to wonder: Would it ever be possible to read out that phase detect information to do post-processing on the images, for things like artificially shallow depth of field, etc?
Makoto Oishi: Like Sony? Sony has a display of phase detection information in the live view display.
RDE: I wasn’t thinking about just displaying it in the live view, but rather about being able to record it for some sort of post-processing on the computer. The user could say “ah, this is the subject I actually wanted” and the computer could apply a convolution operator to sharpen it. Really though, the question isn’t so much about what to do with it as it is whether the camera’s architecture would support reading out all the phase data.
Makoto Oishi: Our sensor doesn’t have all-pixel phase information.
RDE: Ah, yeah - not every pixel is an AF pixel.
Makoto Oishi: Yes, not like Canon. We could store the phase information for each AF area though.
RDE: That’s interesting. I don’t know, it might be a solution in search of a problem, but I think it would be interesting to see what people might be able to do with it.
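Just to make the idea concrete, here’s a minimal sketch of what such post-processing might look like. It assumes a hypothetical coarse per-AF-area defocus map (Fujifilm hasn’t published any such recording format; the function name, map format and scaling are my own illustrative assumptions): the coarse map is upsampled to image resolution and used to weight an unsharp mask, so only the areas the user wants refocused get sharpened.

```python
import numpy as np

def refocus_sharpen(image, defocus_grid, strength=1.5):
    """Sharpen regions of a grayscale `image` in proportion to estimated defocus.

    `defocus_grid` is a hypothetical coarse per-AF-area map of defocus
    magnitudes (not an actual Fujifilm data format). It is upsampled to the
    image size and used to weight a simple unsharp mask.
    """
    h, w = image.shape
    # Nearest-neighbor upsample of the coarse AF-area grid to full resolution.
    gy = np.arange(h) * defocus_grid.shape[0] // h
    gx = np.arange(w) * defocus_grid.shape[1] // w
    weight = defocus_grid[np.ix_(gy, gx)]
    weight = weight / (weight.max() + 1e-9)  # normalize to 0..1

    # Simple 3x3 box blur via padding and slicing (no SciPy dependency).
    p = np.pad(image, 1, mode="edge")
    blur = sum(p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)) / 9.0

    # Unsharp mask: add back high-frequency detail, scaled by local defocus.
    return image + strength * weight * (image - blur)
```

Areas where the defocus map is zero pass through untouched, which is the point: the per-area phase data would tell the software where sharpening is worth attempting.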
How about the new 18-120mm f/4? How did you implement its constant-focus ability?
RDE: The new 18-120mm f/4 is also very interesting to me. First, $899 US for a constant-aperture zoom in that focal length range seems like a very good price. Also, while I’m sure it’s a great all-in-one lens for still shooters, it really looks like the ultimate video lens: Compact and lightweight, broad zoom range, stepless aperture, variable-speed power zoom, little or no focus breathing. And it’s not only constant-aperture but apparently also a true zoom lens as well, in that the focus doesn’t shift when the focal length is changed. I’m curious, is the focus tracking across focal lengths done via mechanical cams, or is the positioning of all the lens elements done electronically?
(Another summary of a long back-and-forth, before I finally understood what they were saying)
The new 18-120mm seems to function as a true zoom lens, in that the focus doesn’t change as you change focal lengths. Almost all “zoom” lenses these days are actually what are called “variable focal length” lenses. That is, when you change the focal length, the focus shifts as well. It’s tricky to keep the focus constant while changing the focal length, often requiring complex cam systems to move multiple elements in concert with each other. The complexity is why most modern zooms are actually variable focal length.
As I suspected, the 18-120mm is actually a variable focal length design optically, but the lens and the camera work together so it acts like a true zoom from the user’s viewpoint. I’m not sure of the actual mechanical arrangement; at one point they described it as a “pivot system”, which I couldn’t make sense of, but they later explained that there are shafts moved by an electrical actuator. (I imagine a linear stepper, linear motor, voice-coil actuator or the like.)
RDE: So does the lens processor itself just move the focus group a fixed amount based on the change in focal length, or is it a closed-loop process that involves the autofocus system?
Shu Amano: I think it’s in-between. The camera recognizes the distance to the subject, and when the zoom is changed, it detects that and moves the focus element. But at the same time, the camera is always doing AF tracking. So it’s something in the middle. Starting with this model, they do this very frequently compared to previous models.
Jun Watanabe: We can manage the pivot system, and the communication speed between the body and lens is roughly 10x faster, so we can achieve fast and precise changing of the position. And also for the tracking autofocus.
It appears that the camera is involved in the process, and an enabling factor is that the communication between this lens and the camera body is close to 10x faster than in previous lenses.
I didn’t think to ask during our meeting, but in a subsequent email, Fujifilm told me that this enhanced performance is in fact the result of the X-H2S and 18-120mm working together, taking advantage of the higher-speed lens/body communication that they both support. To take full advantage of the lens’s AF speed, you’ll thus want to pair it with an X-H2S body.
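To make the mechanism described above concrete, here’s a minimal sketch of how feed-forward zoom compensation plus closed-loop AF trimming might combine. Everything here is illustrative: the lookup table, the inverse-distance scaling and the function names are my assumptions, not Fujifilm’s firmware. The idea is just that a zoom-position-to-focus-offset table handles the bulk of the correction instantly, and continuous AF removes the residual error.

```python
def focus_correction(focal_length_mm, subject_dist_m, table):
    """Interpolate the focus-element offset for a given zoom position.

    `table` maps focal lengths (mm) to the offset (in motor steps) that keeps
    a subject at 1 m in focus; the offset is assumed to scale roughly with
    the inverse of subject distance (a simplification of real lens behavior).
    """
    fls = sorted(table)
    # Clamp to the table's range, then linearly interpolate between the two
    # nearest calibration entries.
    if focal_length_mm <= fls[0]:
        base = table[fls[0]]
    elif focal_length_mm >= fls[-1]:
        base = table[fls[-1]]
    else:
        lo = max(f for f in fls if f <= focal_length_mm)
        hi = min(f for f in fls if f >= focal_length_mm)
        t = (focal_length_mm - lo) / (hi - lo)
        base = (1 - t) * table[lo] + t * table[hi]
    return base / subject_dist_m

def track_zoom(focal_lengths, subject_dist_m, table, af_residual=0.0):
    """Feed-forward focus positions for a zoom pull; continuous AF would
    supply `af_residual` to trim whatever the table gets slightly wrong."""
    return [focus_correction(f, subject_dist_m, table) + af_residual
            for f in focal_lengths]
```

This would explain why the ~10x faster lens/body communication matters: the feed-forward move and the AF correction both have to happen within a frame or two of the zoom ring turning for the result to look parfocal.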
Is it true that the 18-120mm was designed by broadcast/cinema optical engineers?
RDE: A Fujifilm video mentioned that the designers of 18-120mm previously worked on the (renowned) Fujinon broadcast TV lenses. Can you describe how their experience designing Fujinon TV lenses gave them unique insights or a unique approach in designing the 18-120mm? How was their approach to the design of this lens different, compared to more conventional XF optics? Is the optical formula significantly different from other XF zooms?
Jun Watanabe: First, we had many interviews with users and many discussions with our TV/Cinema lens design team. From that, we fixed the specifications for the operability.
RDE: So it was a very interactive process with the users and the R&D team from the TV/Cinema area. It wasn’t necessarily that a person who had designed Fujinon lenses before came and had a different approach, it was that you were interacting with R&D people there.
Jun Watanabe: The team was half TV and half still photography designers.
RDE: Ah, so it wasn’t a matter that it was one designer, there were many people working on it together?
It turns out that digital cameras and broadcast/cinema lenses both fall under Fujifilm’s Professional Imaging Group, so they’re fairly closely connected. Lens designers tend to mainly work on either broadcast/cinema or still-camera lenses, but for this product, the design team was about evenly split between the two groups, and there apparently was more consultation with R&D engineers from the broadcast/cinema area than in the past. I’m not sure how this actually played out in the design process, but it makes sense from the standpoint that the 18-120mm’s feature set seems very focused on the needs of videographers.
Any more details on the GF-mount tilt-shift or GF 20-35mm lenses?
RDE: My last question is about the GF tilt-shift lens. It’s been mentioned that one is coming, can you share any information about what focal length/aperture users can expect?
Yuji Igarashi: Well, we don’t have much information to disclose at the moment, but we can say that we’re working on it.
Makoto Oishi: If you look at the lens roadmap, you can maybe imagine what the positioning means. That’s all the information we have.
Duh - the red dot on the GF roadmap for the tilt-shift lens falls exactly in line with the current 30mm prime, so it’s a safe guess that it’ll be a 30mm focal length :-)
RDE: Similarly, is there any information on the max aperture of the GF 20-35mm zoom lens that you could share with our readers?
Makoto Oishi: Not for disclosure…
Any comments for the community?
RDE: That’s the last of my prepared questions, but before we end, are there any statements or comments you’d like to share with the market? What would you most want people to know about Fujifilm products that they might not be aware of?
Yuji Igarashi: That’s a difficult one … <pause> … People are concerned about us going to the mode dial on the X-H2S, but they should know that the technologies incorporated in that camera can also be applied to other models, like AI, subject recognition and things like that. It’s probably not necessary for cameras like the X-Pro to have that very fast tracking, but the AF technology is of course going to be helpful. So the technology can be applied to other cameras as well, and we’re always thinking about Fujifilm’s total offering. We’re always thinking about [all the] people in the market, so users don’t need to be concerned about our direction just based on one or two products. At the same time, we appreciate that people are so interested in our products and concerned about them. In general, we’re very appreciative of people’s interest in our products.
RDE: I think that’s a good statement, to not get so concerned about a mode dial on one particular camera, or some other feature that appears on a particular model. The bottom line is that the technology you develop will be available for other products over time.
Yuji Igarashi: Yeah, and I think the X-H2S is such an important camera for us because it expands our field of photography. I think that some people (such as our dealers) were concerned that they couldn’t recommend our X-Mount system to everyone, because there were certain limitations such as for wildlife photography or bird photography or sports photography. So there was maybe a tendency to think that Fujifilm cameras were most suited just for people who love color, high image quality and things like that. But starting with the X-H2S, I think we can say that everybody can appreciate X-Mount, whatever kind of photography they’re doing. And even if you’re not interested in sports or bird photography at the moment, eventually if you do become interested, we also have offerings for you there as well. So the X-H2S has broadened what we can offer.
This was interesting, and an angle I hadn’t considered before. It’s not just that the X-H2S has fast, smart autofocus and can shoot quickly, it’s that it fills what was a critical gap in the X-Mount lineup. People may have hesitated to get on the platform because of that prior limitation. With that check-box now ticked, people can get onboard without feeling like they might bump into some limitation in the future.
RDE: Well, I think those are actually all the questions I have, thank you so much for spending so much time with me to answer them!
All: Thank you too.