Did the EHT project ever try to image a reference object that was simultaneously visible with a traditional telescope such as Hubble or Webb?
I find it curious that they would try to image something that hasn't been observed by other telescopes, and whose true appearance we don't even know, without first proving that the method works on something already well characterized, to act as a reference.
Dr. Becky talked about this topic in her video this week. She goes into a fair amount of detail about the back and forth between the two research groups involved. She is good at giving a layman's explanation, but I got a little lost in the details she went into on this one.
https://youtu.be/9U6bvR6SzMo
At the time they announced the image, they described some of the extrapolation they had to do to fill it in [1]. IIRC I saw a talk that described some of the different methods, but I can't find it now. I recall it was about turning a few frequency-domain points into many spatial-domain points. They used multiple teams with different methods, which gave qualitatively similar results in the end, but my reaction at the time was that it tanked my confidence in the details of the image, since I couldn't tell what was data and what was model. A first-of-its-kind image is exactly the situation where you want to extrapolate very little.
[1]: https://www.youtube.com/watch?v=4Ws0iPDSqI4&t=1560
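For what it's worth, the core difficulty can be shown with a toy NumPy sketch (my own illustration, not the EHT pipeline; the ring image, the ~5% sampling fraction, and the gap-filling rules are made-up assumptions): two visibility sets that agree exactly on a sparse set of measured frequency-domain points can still imply very different spatial-domain images, which is why the choice of model/prior matters so much.

```python
import numpy as np

# Toy illustration: with only a few frequency-domain samples, many different
# spatial-domain images are consistent with the data, so the reconstruction
# depends on the assumptions used to fill the gaps.
rng = np.random.default_rng(0)
n = 64

# "True" sky: a thin ring of emission.
y, x = np.mgrid[:n, :n] - n / 2
r = np.hypot(x, y)
truth = ((r > 18) & (r < 22)).astype(float)

# Full Fourier plane of the truth, then keep only a sparse subset of points,
# mimicking the sparse (u,v) coverage of a small array of telescopes.
vis = np.fft.fftshift(np.fft.fft2(truth))
mask = np.zeros((n, n), dtype=bool)
idx = rng.choice(n * n, size=200, replace=False)   # roughly 5% of the plane
mask.flat[idx] = True
sampled = np.where(mask, vis, 0)

# Reconstruction 1: leave the unmeasured frequencies at zero ("dirty image").
dirty = np.fft.ifft2(np.fft.ifftshift(sampled)).real

# Reconstruction 2: fill the gaps with a different assumption (here, the mean
# measured amplitude with random phases). The two visibility sets agree
# exactly on every measured point, yet imply very different images.
fill = np.abs(sampled[mask]).mean() * np.exp(2j * np.pi * rng.random((n, n)))
alt = np.where(mask, vis, fill)
alt_img = np.fft.ifft2(np.fft.ifftshift(alt)).real

print("RMS difference between the two reconstructions:",
      np.sqrt(np.mean((dirty - alt_img) ** 2)))
print("RMS amplitude of the true ring, for scale:",
      np.sqrt(np.mean(truth ** 2)))
```

The real pipelines use far more principled regularization than this, of course; the point is only that the "data" pins down a small number of Fourier coefficients and the rest of the pixels come from the model.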
> For now, either view of the disk's real shape could be correct. Astronomers say forthcoming technological upgrades to telescopes will allow them to gather more detailed images and better constrain the area around Sgr A* and other black holes.
Anyone know the expected timeline for better data?
This is why I hate seeing computer modeling used as science. You get out of it what [assumptions] you put into it.
I'm not saying there's never a use for it, but it should only be used in areas with established information, to see how things will develop (a weather forecast is a great example); it should never be used to generate the information itself (i.e., to fill in gaps in experimentation or observation).
Also in cases where the physics can truly be modeled. Usually there are so many assumptions, simplifications, and added constants, introduced so that the math works, that the model is only applicable in very specific or simplistic cases. Then, once proven effective for those cases, it gets applied in the wild and assumed correct. We have seen many simplistic models from the DOS days upgraded to Windows, yet the math hasn't changed. Simplifications made for 1990s computers, linearity for example, are used to model non-linear events.
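To make the linearity point concrete, here's a minimal Python sketch (hypothetical numbers, not any particular model): a straight line calibrated where an exponential process is still roughly linear looks fine in that regime and fails badly when extrapolated.

```python
import numpy as np

# A linear fit to the early part of a nonlinear (exponential) process looks
# accurate in the regime it was calibrated on, then extrapolates badly.
t_fit = np.linspace(0, 1, 20)           # regime the model was validated in
t_wild = np.linspace(0, 4, 20)          # regime it later gets applied to
truth = lambda t: np.exp(t) - 1         # the actual (nonlinear) behaviour

# Calibrate a linear model on the "easy" regime.
slope, intercept = np.polyfit(t_fit, truth(t_fit), 1)
linear = lambda t: slope * t + intercept

err_fit = np.max(np.abs(linear(t_fit) - truth(t_fit)))
err_wild = np.max(np.abs(linear(t_wild) - truth(t_wild)))
print(f"max error in validated regime: {err_fit:.2f}")   # small
print(f"max error in the wild:         {err_wild:.2f}")  # large
```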