Could this also be used for manufacturing lots of other microscopic things? layer by layer?
"Instead, their method finds errors up to 0.017 nm along side-to-side measures (x and y axes) and 0.134 nm when assessing the distance between the two chips (z-axis)."
Could you make some very very sensitive and tiny seismic sensors with this?
edit:

"Arbabi also points out that this method can be used to make displacement sensors that can be used for measuring displacements and other quantities. "Many physical quantities that you want to detect can be translated to displacements, and the only thing you need is a simple laser and a camera," he says.
For instance, "if you want a pressure sensor, you could measure the movement of a membrane." Anything that involves movement—vibration, heat, acceleration—can in theory be tracked by this method."
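To make the laser-plus-camera idea concrete, here's a minimal sketch (my own, not from the paper) of recovering a sub-pixel in-plane displacement from the intensity-weighted centroid of a laser spot on a camera. The `nm_per_pixel` calibration factor and the synthetic Gaussian spot are assumptions for illustration:

```python
import numpy as np

def spot_centroid(frame):
    """Intensity-weighted centroid of a camera frame, in pixel coordinates."""
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    return (xs * frame).sum() / total, (ys * frame).sum() / total

def displacement_nm(frame_a, frame_b, nm_per_pixel):
    """Apparent in-plane shift of the spot between two frames, in nanometres."""
    xa, ya = spot_centroid(frame_a)
    xb, yb = spot_centroid(frame_b)
    return (xb - xa) * nm_per_pixel, (yb - ya) * nm_per_pixel

def gaussian_spot(shape, cx, cy, sigma=3.0):
    """Synthetic laser spot, standing in for a real camera frame."""
    ys, xs = np.indices(shape)
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

# A quarter-pixel shift of the spot resolves to ~25 nm at 100 nm/pixel,
# well below the pixel pitch, because the centroid averages over the spot.
a = gaussian_spot((64, 64), 32.0, 32.0)
b = gaussian_spot((64, 64), 32.25, 32.0)
dx, dy = displacement_nm(a, b, nm_per_pixel=100.0)
```

This is only the trivial centroid trick; the paper's holographic marks encode far more structure per photon, which is where the picometre-scale figures come from.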
I made a home-brew seismic sensor using something similar: a hard-disk head arm assembly, a CD-ROM laser pickup (which has an anisotropic lens and four photodiodes), and a Red Pitaya as the PID controller, so I guess it can be done!
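For anyone curious, the control half of a rig like that is small. A minimal PID loop sketch against a toy integrator plant (my own illustration, not the Red Pitaya API; gains and timestep are assumptions you'd tune for a real actuator):

```python
class PID:
    """Textbook PID controller; kp/ki/kd and dt are assumed, tune per plant."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt          # accumulate error for the I term
        deriv = (err - self.prev_err) / self.dt  # finite-difference D term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: actuator position simply integrates the drive signal.
pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=0.01)
pos = 0.0
for _ in range(4000):
    pos += pid.update(1.0, pos) * 0.01  # drive the position toward setpoint 1.0
```

In the seismometer use case the "plant" is the head arm held at the null point of the four-photodiode signal, and the drive current needed to hold it there is your seismogram.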
Oh man, this is giving me The Amateur Scientist vibes. What would C. L. Stong be doing with today's technology?
The last bit about using this same technique for sensors is pretty cool. Ultra-sensitive microphones or touch sensors would be pretty awesome.
Ultrasensitive microphones are easy and have existed for decades. From a shotgun mic at a rock concert to a laser interferometer detecting gravitational waves, the problem is the same: the issue is not detecting vibration but shedding the sounds you don't want to hear. This new device will certainly be able to detect vibrations, but is likely useless outside of ultraquiet environments.
You also need to take into account the manufacturing precision you can achieve when fabricating the lenses, how accurately you can position the lenses relative to the workpiece, etc. From my cursory reading, the paper assumes perfect lenses and positioning and only simulates the alignment procedure. Still a worthy paper, but as others have mentioned, not much different from the methods used for optics alignment.
This is not novel in general - the same technique has been used in lens alignment for decades.
Maybe I'm missing something here: doesn't this simply move the precision problem to a different part of manufacturing? Previously you had to be precise when aligning the chips; now you have to be precise in how you put the alignment marks on the chips you want to align. Or is it considerably easier to put the marks on the chips with sufficient precision?
Putting marks on the chip with high precision is much easier. That's done by the same lithographic process used to build up all the other layers of the chip (generally by exposing a photosensitive layer of material to light through a mask), and they already have ways of keeping those mask layers in alignment.
But aligning multiple chips together is a different process, and while it sounds like they previously had ways to do this via simple optical inspection of those alignment marks, that's less accurate than a holographic alignment using a laser.
I would think the alignment marks would be included in the photomasks, so they would be part of the chips themselves
Isn't this very similar to the way optical position encoders work?
I think the difference is that you don't need to change the focus of the optical sensors in order to verify alignment. So you don't have to worry about movement while the focus is changed.
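On the optical-encoder comparison: classic incremental encoders read two photodetector channels 90° out of phase and count state transitions, with no focusing involved at all. A minimal decoder sketch (the transition table is the standard quadrature scheme; the function names are mine):

```python
# Two optical channels A and B, 90 degrees out of phase, packed as a
# 2-bit state (A << 1 | B). Each valid Gray-code transition is one step.
STEP = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """Net step count from a sequence of sampled (A, B) states."""
    count = 0
    for prev, cur in zip(samples, samples[1:]):
        count += STEP.get((prev, cur), 0)  # repeats/invalid jumps count as zero
    return count

forward = [0b00, 0b01, 0b11, 0b10, 0b00]  # one full cycle forward: 4 steps
```

The difference is resolution: an encoder counts discrete grating periods, while the holographic marks interpolate continuously within a fringe, which is what buys the sub-nanometre figures quoted above.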
Nothing is invented, only discovered.
> Nothing is invented, only discovered.
https://en.wikipedia.org/wiki/Invention
https://dictionary.cambridge.org/dictionary/english/inventio...
https://www.merriam-webster.com/dictionary/invention
https://www.britannica.com/technology/invention-technology
my opinion alarm is jumping off the desk over here, sheesh
It's a great mental exercise. Reading it, my opinion alarm went off. (Thank you for the source material!)
Maybe that's very clever or just dumb semantic nitpicking, but unless you say more no one will know which you think it is.
I think only discovered. It was always there, but someone(s) just had to do the work to find it.
There's a word for this concept, discovering useful techniques that were always there but which nobody else had discovered before; we call it "invention".
I like to think of it more like a scale. Some things are closer to being invented, other things are closer to being discovered.
I'd say we've discovered pi, and the fractional quantum Hall effect[1]. And I'd say we've invented low-density parity-check codes[2] and single-photon avalanche diodes[3].
[1]: https://en.wikipedia.org/wiki/Fractional_quantum_Hall_effect
[2]: https://en.wikipedia.org/wiki/Low-density_parity-check_code
[3]: https://en.wikipedia.org/wiki/Single-photon_avalanche_diode
That's a bit reductive. Without a human mind putting in the effort to wander the concept space, the concept would never be touched, and it would never be realized. The claim that everything logically possible already exists, because it's an inescapable eventual logical conclusion, seems a bit silly.
All that said, I do mostly agree.
Go back to the source (book, whatever) and read it again; it's unlikely that it said that nothing is invented, and you missed the point. E.g. electromagnetism was discovered → the electric engine was invented (maybe as a result of the discovery, maybe not). You can discover how a wheel works, or invent a wheel without discovering the principle.
I mean, one could say no one ever writes a book, they just discover it, since that sequence of characters (like all other sequences) was already implicit in reality.
I think this points up the problem with what you're claiming. There is sufficient creativity to get to the exact sequence of characters (or exact configuration of elements for the invention) to distinguish invention (a kind of creation) from mere discovery.
In mathematics, though, we say a mathematician discovers a proof, even if the proof is very creative. So maybe it's not as clear as all that.
Maybe the problem is the nature of constraints around the innovation? If it's sufficiently constrained there's little room for creativity, and the word discovery is more appropriate, even if it was hard to find.