by E.J. Peiker on Fri Aug 22, 2014 11:23 am
User avatar
E.J. Peiker
Senior Technical Editor
Posts: 86788
Joined: 16 Aug 2003
Location: Arizona
Member #:00002
... based on the superb Sony 50mp cropped medium format sensor (about a 1.3x crop to 645) and using sensor shift, they have introduced a new version of the H5D which outputs 200 megapixel files without shrinking the pixel sites:
http://www.hasselblad.co.uk/media/48089 ... _en_v2.pdf
 

by Tom Robbins on Fri Aug 22, 2014 11:42 am
User avatar
Tom Robbins
Forum Contributor
Posts: 937
Joined: 29 Feb 2004
Location: North Central Illinois
As a hard core tilt and shifter, I really like the idea of MF with the HTS 1.5 adapter. The new H5D and the adapter would be a beautiful combination. The pdf file didn't include price, which is probably just as well. It's easier to dream when blissfully ignorant of its cost.
 

by Royce Howland on Fri Aug 22, 2014 11:59 am
User avatar
Royce Howland
Forum Contributor
Posts: 11719
Joined: 12 Jan 2005
Location: Calgary, Alberta
Member #:00460
If money were no object, this would interest me a lot. :) In between putting out ridiculous trinkets (such as the Lunar and Solar) for the nouveau riche of questionable taste, Hassy still does some ground-breaking stuff. This 200MP output capacity is created not by classic stitching or the like, but via a form of "super resolution" from sub-pixel shifts of the sensor. So it increases the resolution without changing the angle of view that you're seeing through the viewfinder.

A form of this can be done with multiple frames from normal cameras, combined in software such as PhotoAcute. But there could be advantages to doing it in-camera...
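For anyone curious, the core interleaving step can be sketched in a few lines. This is a toy illustration only, assuming four already-aligned frames captured at half-pixel offsets; the function name is hypothetical, and real implementations (Hasselblad's included) do considerably more work on alignment and deconvolution:

```python
import numpy as np

def interleave_half_pixel_shifts(frames):
    """Combine four frames captured with half-pixel sensor shifts
    (0,0), (0,1/2), (1/2,0), (1/2,1/2) into one image with 2x the
    linear pixel count. A toy sketch of in-sensor "super resolution"."""
    f00, f01, f10, f11 = frames
    h, w = f00.shape
    out = np.empty((2 * h, 2 * w), dtype=f00.dtype)
    out[0::2, 0::2] = f00   # no shift
    out[0::2, 1::2] = f01   # half pixel right
    out[1::2, 0::2] = f10   # half pixel down
    out[1::2, 1::2] = f11   # half pixel down + right
    return out
```

Note the angle of view never changes: all four frames see the same scene, and only the sampling grid gets denser.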
Royce Howland
 

by rnclark on Fri Aug 22, 2014 12:14 pm
rnclark
Lifetime Member
Posts: 864
Joined: 7 Dec 2010
Member #:01978
A comment on the resolution. Pixel shifting increases sampling, but the angular resolution of a pixel is still large, so adjacent samples overlap. While it can certainly produce a better image of a STATIC subject than a single frame, it is not the same as if the true angular resolution of a pixel were higher. The static subject is a real limit because there must be at least 3 exposures (probably 4), shifting the sensor between each exposure. I prefer a traditional mosaic, where only one exposure is needed per frame, and letting focus move between frames compensates for not having tilt (I routinely do this). It is easy to produce multi-hundred-megapixel mosaics, even hand held, even in wind with moving subjects, and with low-cost cameras. It just takes a little experience. The downside is post-processing time.
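Roger's sampling-vs-resolution point can be shown with a toy 1-D example (illustrative only; the helper name is hypothetical): each shifted read-out still averages over the full pixel aperture, so detail finer than a pixel stays blurred even though the sampling grid gets denser.

```python
import numpy as np

def sample_with_pixel_aperture(scene, pixel_width, offsets):
    """Read a fine-grained 1-D scene with pixels of `pixel_width`
    units, once per shift offset, then interleave the read-outs.
    Each sample averages over the FULL pixel aperture, mimicking
    a sensor whose pixels are larger than the scene detail."""
    samples = []
    for off in offsets:
        row = [scene[i + off: i + off + pixel_width].mean()
               for i in range(0, len(scene) - pixel_width, pixel_width)]
        samples.append(row)
    # interleave the shifted readings into one densely sampled signal
    return [v for pair in zip(*samples) for v in pair]
```

With a scene pattern finer than the pixel aperture, every sample comes out the same: twice the samples, but no new detail recovered.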
Roger
 

by Mike in O on Fri Aug 22, 2014 12:15 pm
Mike in O
Forum Contributor
Posts: 2673
Joined: 22 Dec 2013
This sounds suspiciously like the twilight 6-frame night shots that Sony puts into their cameras. I glanced at the PDF and didn't notice a patent or exclusivity wording. The Sony only does JPEG to keep file size down, but obviously the raw data is there.
 

by Mike in O on Fri Aug 22, 2014 12:33 pm
Mike in O
Forum Contributor
Posts: 2673
Joined: 22 Dec 2013
Roger, I agree with your comments to a point, but if Hassy is using a form of the Sony twilight mode, it actually can be used on moving subjects (obviously not fast-moving ones).
 

by E.J. Peiker on Fri Aug 22, 2014 12:33 pm
User avatar
E.J. Peiker
Senior Technical Editor
Posts: 86788
Joined: 16 Aug 2003
Location: Arizona
Member #:00002
Mike in O wrote:This sounds suspiciously like the twilight 6 frame night shots that Sony puts into their cameras.  I glanced at the pdf and didn't notice a patent or exclusivity wording.  The sony only does jpeg to keep file size down, but obviously the raw is there.
It's different from that.  Sony takes multiple frames and analyzes the output of each in camera and then presents you with the sharpest frame possible from those 6 exposures - the output is the same pixel count as taking a single frame.  This Hasselblad approach physically moves the sensor a small amount between each of the exposures and then puts them all together in a single file utilizing each of those pixels resulting in a much higher pixel count output.
 

by Royce Howland on Fri Aug 22, 2014 12:40 pm
User avatar
Royce Howland
Forum Contributor
Posts: 11719
Joined: 12 Jan 2005
Location: Calgary, Alberta
Member #:00460
Yes, it's certainly true that the word "pixel" doesn't mean the same thing in an over-sampled 200MP image like this Hasselblad can produce, compared to a native 50MP image straight off the same underlying sensor. Sigma has been criticized for talking about the resolution of their Foveon sensors using the word "pixel" the same way as Bayer sensor camera makers do, but multiplying the pixel count because of the way the Foveon sensor vertically stacks the receptor sites (sensels).

As we start getting into more interesting & sophisticated imaging methods, marketing is definitely going to throw some confusion into what words mean. It will take a while to sort out the reality. The benefits of over-sampling or "super resolution" I believe are real, but whether they equate to a 2X linear increase in raw pixel resolution (as Hassy claims in this case) is another matter entirely.

Some of the Hasselblad 200MP claim is going to be based on increased colour resolution by getting different readings of adjacent pixel sites with different members of the Bayer colour filter array. As with Sigma's Foveon, increased colour resolution is important and beneficial, but it's not the same as increased raw pixel resolution from having more sensels in the same recording surface area.

And yes, it will have the same underlying challenge as any other multi-frame blending approach. If something moves fast enough within the field of view, it will defeat the multi-shot procedure's ability to produce a clean file.
Royce Howland
 

by DChan on Fri Aug 22, 2014 12:53 pm
DChan
Forum Contributor
Posts: 2206
Joined: 9 Jan 2009
What is color resolution, if I may ask?
 

by E.J. Peiker on Fri Aug 22, 2014 12:56 pm
User avatar
E.J. Peiker
Senior Technical Editor
Posts: 86788
Joined: 16 Aug 2003
Location: Arizona
Member #:00002
Royce, that's why I used the term "generates 200 megapixel output" for lack of a better term :)

I have long advocated dropping pixel count altogether as a measure of output but rather having a new CIPA standard for resolution that all sensors are tested to for the reason that different sensor technologies really can't be compared in an apples to apples way using megapixels as the measure.
 

by E.J. Peiker on Fri Aug 22, 2014 12:56 pm
User avatar
E.J. Peiker
Senior Technical Editor
Posts: 86788
Joined: 16 Aug 2003
Location: Arizona
Member #:00002
DChan wrote:What is color resolution, if I may ask?
16 bit RAW and 8 bit TIF according to the PDF that I linked.
 

by E.J. Peiker on Fri Aug 22, 2014 12:59 pm
User avatar
E.J. Peiker
Senior Technical Editor
Posts: 86788
Joined: 16 Aug 2003
Location: Arizona
Member #:00002
DChan wrote:What is color resolution, if I may ask?
Oh I think I may have misinterpreted your question.  I personally don't like the term color resolution but rather prefer the term color fidelity.
 

by Mike in O on Fri Aug 22, 2014 1:19 pm
Mike in O
Forum Contributor
Posts: 2673
Joined: 22 Dec 2013
EJ, I believe it does combine shots and not just pick the best
http://www.cameralabs.com/reviews/Sony_ ... ight.shtml
By the way, does anyone have an idea why the Hassy is 200 Mpix with 6 shots? Is it compression, or are they not shifting all the pixels?
 

by Markus Jais on Fri Aug 22, 2014 3:14 pm
User avatar
Markus Jais
Lifetime Member
Posts: 2888
Joined: 5 Sep 2005
Location: Germany, near Munich
Member #:01791
Could this - in theory - be done with any sensor or this is special to the Sony sensor?
Could Nikon do this with a D810 or Canon with a 5D III sensor?

Markus
 

by Mike in O on Fri Aug 22, 2014 3:31 pm
Mike in O
Forum Contributor
Posts: 2673
Joined: 22 Dec 2013
Probably any sensor with stabilization and/or sensor-shake cleaning could be made to do this, unless it's patented. The Pentax moves the sensor for its AA filter simulation.
 

by rnclark on Fri Aug 22, 2014 3:59 pm
rnclark
Lifetime Member
Posts: 864
Joined: 7 Dec 2010
Member #:01978
E.J. Peiker wrote: I have long advocated dropping pixel count altogether as a measure of output but rather having a new CIPA standard for resolution that all sensors are tested to for the reason that different sensor technologies really can't be compared in an apples to apples way using megapixels as the measure.
EJ,
I agree.  Please tell me more about this "new CIPA standard."

In my opinion, photographers need to stop worrying about pixels and focus on the subject ;^).

Resolution on the subject is a function of both the lens and the sensor pixels.  People are often quite confused these days by the varying sensor and pixel sizes and "equivalent" focal lengths.  But the metric for image quality on the subject has been in use for decades in imaging science.  It is the Etendue of the system.  The gory math: http://en.wikipedia.org/wiki/Etendue

It is also called the A Omega product.  I describe it here with telephoto lenses and my metrics for system acuity on different cameras:
http://www.clarkvision.com/articles/tel ... rformance/
The concepts apply to all focal lengths and all sensors.

Roger
 

by E.J. Peiker on Fri Aug 22, 2014 4:08 pm
User avatar
E.J. Peiker
Senior Technical Editor
Posts: 86788
Joined: 16 Aug 2003
Location: Arizona
Member #:00002
Mike in O wrote:EJ, I believe it does combine shots and not just pick the best
http://www.cameralabs.com/reviews/Sony_ ... ight.shtml
By the way, does any one have an idea why the Hassy is 200 mpix with 6 shots; is it compression or are they not shifting all the pixels?
I didn't say it picks the best ;) I said it presents you with the best possible from the 6 shots. I should have said "creates" instead of "presents you with". But again, this is nothing at all like what we are talking about here and largely irrelevant to this topic. Taking 6 shots, superimposing them, and running an algorithm to find the sharpest areas to create a final output is simple and can easily be done today in Photoshop: just take a burst of 6 shots, put them all together, and use auto-align followed by auto-blend. That is very different from what is being done here, and it results in the same number of data points in your final image as taking a single shot.

However you are correct that a camera with IBIS, probably a highly refined and more precise version, would have the underlying electro-mechanical technology in it to do something like this.
 

by rnclark on Fri Aug 22, 2014 4:10 pm
rnclark
Lifetime Member
Posts: 864
Joined: 7 Dec 2010
Member #:01978
Markus Jais wrote:Could this - in theory - be done with any sensor or this is special to the Sony sensor?
Could Nikon do this with a D810 or Canon with a 5D III sensor?

Markus

Yes, it can. The first implementation I am aware of was on the Mars landers. Simply take multiple images; each image has a slightly different, unknown pointing position, so the view shifts slightly between frames. By solving for the sub-pixel alignment, the different images can be combined into a super-resolution image. The code is published, and one could in theory simply take multiple frames of a scene with one camera (for example, at 10 frames a second you record enough frames in under half a second, and vibrations will offset each image slightly from the others). One could do this hand held as well as tripod mounted, as long as the camera is not locked down too tight. So turn your D800 images into 100 to 200 megapixel images.
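The registration step Roger describes, solving for the sub-pixel alignment between frames, can be sketched in 1-D using FFT cross-correlation plus a parabolic peak fit. This is a simplified illustration (hypothetical function name), not the published Mars lander code:

```python
import numpy as np

def subpixel_shift_1d(ref, img):
    """Estimate the (sub-pixel) shift of `img` relative to `ref`:
    locate the circular cross-correlation peak, then refine it with
    a parabolic fit through the peak and its two neighbours. This
    is the alignment step that super-resolution stacking relies on."""
    n = len(ref)
    # circular cross-correlation via FFT
    corr = np.fft.ifft(np.fft.fft(ref).conj() * np.fft.fft(img)).real
    k = int(np.argmax(corr))
    # parabolic refinement around the integer peak
    y0, y1, y2 = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    shift = k + frac
    return shift if shift <= n / 2 else shift - n   # wrap to signed shift
```

Once each frame's fractional offset is known, the frames can be resampled onto a common finer grid, which is essentially what super-resolution software does in 2-D.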

I still think traditional mosaics are easier, though it is pretty easy to rip off 10 frames quickly. I'm not sure how easy it would be to get super-resolution software running.

Roger
 

by E.J. Peiker on Fri Aug 22, 2014 4:24 pm
User avatar
E.J. Peiker
Senior Technical Editor
Posts: 86788
Joined: 16 Aug 2003
Location: Arizona
Member #:00002
rnclark wrote:
E.J. Peiker wrote: I have long advocated dropping pixel count altogether as a measure of output but rather having a new CIPA standard for resolution that all sensors are tested to for the reason that different sensor technologies really can't be compared in an apples to apples way using megapixels as the measure.
EJ,
I agree.  Please tell me more about this "new CIPA standard."
I'm saying there needs to be one, just like there are other CIPA standards to compare different things that are dissimilar. 
 

by Royce Howland on Fri Aug 22, 2014 5:34 pm
User avatar
Royce Howland
Forum Contributor
Posts: 11719
Joined: 12 Jan 2005
Location: Calgary, Alberta
Member #:00460
DChan wrote:What is color resolution, if I may ask?
Colour resolution is simply just that -- the ability to resolve colour. It's a subset of the broader phrase E.J. used, colour fidelity. A Bayer-based sensor doesn't have equivalent colour resolution at all light receptors (sensels) on the chip because the array is broken into a 2x2 matrix with 1 Red, 1 Blue and 2 Green. So the true Green resolution is 2X the resolution of either Red or Blue, but even Green is only resolved in half of the sensels in the 2x2 matrix. These R, G and B sensels are put together into a full colour image by a bunch of fancy interpolation that looks at the single colour readings on the sensels and merges them together, guessing what the real colour was to create the RGB pixels we see in the output image.

This is a key marketing focus of Sigma with the Foveon sensor because it doesn't use a Bayer array with R, G and B sensels placed side-by-side. Instead, they are stacked vertically on the chip. So each point of light reception is capturing all 3 of R, G and B (glossing over some details on how this works), and therefore a Foveon sensor delivers higher colour resolution without interpolation of colours, even though the sensor has lower actual pixel resolution.

Reading the Hasselblad marketing copy, I believe they are also delivering higher colour resolution even using a Bayer array sensor, because they are physically shifting the sensor around. With a static enough scene, they can place the R, G and B filtered sensels of the sensor such that the true colour can be mapped at each point by blending multiple read-outs of the sensor. Something a Foveon type vertical stacked array could do in a single exposure.

So colour resolution is not the same as full pixel resolution, but it's still a useful thing, and a legit area of innovation.
E.J. Peiker wrote:Royce, that's why I used the term "generates 200 mega pixel output" for lack of a better term. [...]
Yeah, some new standard in terminology would be helpful here. I think talking about the output is fair, but even that gets a bit hazy because some output is from "raw capture" i.e. sensel sites on the sensor, some is processed somehow from intermediate captures, some is interpolated (i.e. fabricated), etc. I mean, what's to stop a vendor from doing Bicubic upsizing in-camera and then claiming to produce a 200 MP output file? :) Perhaps Roger's pointer to etendue or some other optical property could be rendered down into something usable in this area...
Mike in O wrote:By the way, does any one have an idea why the Hassy is 200 mpix with 6 shots; is it compression or are they not shifting all the pixels?
It's probably to do with their algorithm for both super resolution (sub-pixel shifts to pick up increased detail resolution) and higher colour resolution (partial or full pixel shifts to place different elements of the Bayer array to more accurately measure colour at each discrete receptor point).
Markus Jais wrote:Could this - in theory - be done with any sensor or this is special to the Sony sensor?
Yes, it could be done with any sensor given the ability to shift it precisely in-camera, take multiple read-outs, and combine those intermediate frames using software algorithms. The desktop software I mentioned above, PhotoAcute, basically does super resolution with the output files from any camera right now. But there are certain games you could play if you can do this in camera at an earlier stage of the imaging pipeline, instead of doing it all in post.
rnclark wrote:In my opinion, photographers need to stop worrying about pixels and focus on the subject.
The subject matters most, but I don't agree that we ALL should stop worrying about pixels, or other trade-offs in the tools. Otherwise only one single type of camera would ever have existed, and we would all be shooting it. It would be something like a pinhole film camera in a simple box with a fixed lens, I suppose. Telling a person who's targeting 40x60 prints to stop worrying about pixels isn't going to stop them worrying about pixels. While many people obsess about details that don't honestly improve their work, others obsess about those same details because they do matter in those cases. :)

The real key for each photographer should be to worry about the things that make their own work better, or which they have fun worrying about, and not worry about the other things...
Royce Howland
 
