Of course this is not the case: we rarely stop to think how our electronic systems have been crafted around the limitations of human perception. To explore this issue, in this article we ask the question: “What might an alien think of human technology?” We will assume a lifeform which senses the world around it much as we do, but with massively improved sensing abilities. In light of these abilities we will dub it the Oculako.
Let’s begin with the now mostly defunct CRT display and see what our hypothetical alien thinks of it. The video below shows a TV screen shot at 10,000 frames per second.
Our limited visual system detects changes slowly (PDF). Human persistence of vision takes effect at around 15 frames a second, merging the lines together to create a single image. The Oculako processes images far faster, and so sees what looks like a line racing down the screen. Such an organism might not even recognize this as a display.
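The gap between the two viewers comes out of some quick back-of-envelope arithmetic (the refresh rate and integration window below are illustrative assumptions, not measurements from the video):

```python
# Rough arithmetic for the high-speed CRT footage. All figures are
# illustrative assumptions: a 60 Hz CRT refresh and a ~1/15 s
# persistence-of-vision window.
camera_fps = 10_000           # high-speed camera frame rate
crt_refresh_hz = 60           # typical CRT refresh rate
human_integration_s = 1 / 15  # approximate persistence-of-vision window

# Camera frames captured during a single screen refresh:
frames_per_refresh = camera_fps / crt_refresh_hz
# Full screen refreshes merged within one human "perceptual frame":
refreshes_merged = crt_refresh_hz * human_integration_s

print(frames_per_refresh)  # ~167 camera frames per refresh
print(refreshes_merged)    # ~4 refreshes blur into one perceived image
```

So the camera (and the Oculako) sees each refresh stretched across hundreds of instants, while we average several whole refreshes into a single image.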
Even if the Oculako can work its way around this slow update rate it still has odd clusters of Red Green and Blue dots to contend with. Humans experience the world through these three overlapping regions across the 400 to 700nm range of the electromagnetic spectrum.
Human color sensitivity [Source: Norman Koren]

In the electronics world, we have only developed sources which produce light at distinct wavelengths. And so we mimic the operation of the human eye, mixing light at these three wavelengths to fool our eyes into believing they are seeing a single color.
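The spatial half of the trick can be sketched in a few lines: two adjacent subpixels that our low-resolution eye blurs into one color (the averaging below is a simplification of what the eye and brain actually do):

```python
# A 1x2 patch of display subpixels: pure red next to pure green,
# each as an (R, G, B) triple. Up close -- the Oculako's view --
# these are two distinct colors.
red_subpixel = (255, 0, 0)
green_subpixel = (0, 255, 0)

# Our eye's limited spatial resolution acts like a blur: adjacent
# subpixels average together into a single perceived color.
perceived = tuple((a + b) / 2 for a, b in zip(red_subpixel, green_subpixel))
print(perceived)  # (127.5, 127.5, 0.0) -- seen as a dim yellow
```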
This is far from universal among animals. Dogs can only see Green and Blue. Some butterflies see from ultra-violet through to Red, but with a host of additional receptors (5 across the spectrum) giving them an improved ability to distinguish colors.
All these pale in comparison to the Mantis shrimp. The Mantis shrimp has better definition in ultra-violet than we have across the entire spectrum. With a total of 12 receptor types, it has the best vision of any known animal (though it may not use that information well).
But let’s suppose the Oculako is far better than this: its receptors are spectrographic, sensing wavelengths across the spectrum with sub-nanometer accuracy. Its high resolution eyes are easily able to pick out the individual pixels in our displays, so all it sees is a curious, ever-changing mosaic of color with no discernible meaning.
Common LCDs under a microscope, as the Oculako sees them. [Source: ExtremeTech]

LCD and LED displays are a little better: at least the Oculako sees a complete image. But the mosaic structure persists, while screen refreshes race down the screen, merging into one another.
If the Oculako were to come across a modern DLP projector it might get quite a shock. Unlike LCDs, which mix Red, Green and Blue spatially, DLPs mix colors temporally. There are some great videos explaining the operation of the micro mirrors at the heart of DLP projectors. But put simply, a DLP projector is composed of an array of thousands of micro mirrors which reflect light onto or away from a surface to produce an image.
An SEM image of micro mirrors from a DLP projector [Source: Ben Krasnow]

As the micro mirrors are either “on” or “off”, other techniques are required to produce color and intensity variation. In order to produce brighter or darker pixels the mirrors use pulse width modulation (PDF). By flicking the mirrors on and off rapidly, our eyes are fooled into thinking we’ve seen a brighter or darker image. In order to generate different colors, the projector filters the light through a rapidly spinning color wheel. This produces a quick succession of Red, Green and Blue images which our slow-responding eye temporally mixes to produce a seemingly continuous color image.
To produce a realistic image using this process, the micro mirrors flick back and forth thousands of times a second. The Oculako’s advanced eyes pick out each flick with ease.
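The PWM trick can be sketched as follows; the 256-slot frame is an illustrative simplification of a real DLP's bit-plane scheduling, not the actual chipset timing:

```python
# Sketch of DLP-style pulse-width modulation. The mirror is only ever
# fully "on" (1) or "off" (0); perceived brightness is the fraction of
# the frame spent "on". Timings are illustrative, not real DLP timing.

def mirror_schedule(level, slots=256):
    """Binary on/off schedule for one frame, given an 8-bit level."""
    return [1 if i < level else 0 for i in range(slots)]

def perceived_brightness(schedule):
    """A slow eye integrates the flicks into an average intensity."""
    return sum(schedule) / len(schedule)

frame = mirror_schedule(64)          # request ~25% gray
print(perceived_brightness(frame))   # 0.25

# The Oculako, sampling far faster than one frame, instead sees the
# raw schedule: a strobing sequence of pure on/off states.
```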
But more than all this, the Oculako sees no reason for the flatness of these ever-changing mosaics. Like the so-called Lytro “light field” camera, the Oculako’s eyes capture both the intensity and direction of the light entering them. This allows it to reconstruct a 3D representation of the world around it, far more accurately than our stereoscopic vision and much better than the images produced by our fledgling 3D TVs and VR headsets.
While our displays might be incomprehensible, you might think our sound reproduction is surely better? Unfortunately this is not the case. The best human ears are limited to sounds of 20 kHz and below. This is blown away by what other animals can hear: some species are able to perceive sounds ranging into the hundreds of kilohertz. The sounds produced by our speakers therefore sound low and dull to the Oculako. Natural sounds, like babbling water, would also be unrecognizable, clipped as they are by our audio systems. With its advanced hearing, perhaps the Oculako even transmits complex data by sound.
Our world is likely to be a confusing place for the Oculako. It’s easy to fall into the trap of thinking that other organisms, terrestrial or extraterrestrial, could view our user interfaces even if they didn’t understand them. But this little survey of the visual and audio technologies we’ve developed (and the great work done by hackers to elucidate their construction) shows they are very narrowly confined to our particular set of senses.
But what of our actual attempts to communicate with alien life? The most famous of which is perhaps the Voyager Golden Record.
A fascinating artifact in itself, the Voyager record is similar to a normal long-playing record, fabricated from gold-plated copper. On one side it is etched with a graphic designed to provide instructions on the operation of the record and on how the grooves are used to store information on the disc.
As well as audio recordings which might teach aliens to speak, the record also encodes color image data. Inevitably, it likely suffers from the issues described here. An enhanced sensory system (like the Mantis Shrimp’s) does not imply higher intelligence or the ability to easily interpret complex messages, and so the data may remain incomprehensible.
Nonetheless, it’s a very difficult problem to come up with an interspecies communication mechanism, especially considering that we don’t know of any other sentient life-forms or what their senses might be, and that we were heavily constrained in how the communication was delivered. Given the technological advances since the 1970s, how would you design this era’s golden record?
Yes I think that’s right. From what I’ve read it seems difficult to boil it down to a single figure though. There’s an interesting article from NASA linked above. It seems the effect is inconsistent across the retina and also depends on the speed of the moving object.
Speed of the object, location (straight ahead, 70° off to the left, etc.), and… the individual person! They’ll all make a difference. My only real point was that the article says “takes effect at around 24 frames a second.” I’m saying it takes effect around 15 (+/-). Not trying to split hairs though. ;)
Ah I see the problem with the way I wrote that now. I’ve updated the article so it’s more accurate here.
It’s just not that simple. People can perceive and recognize images that flash down to 1/2000th of a second under certain circumstances. Persistence of vision depends largely on the amount of light available.
The idea that an alien would have superior eyesight to the extent of seeing both 10,000 frames per second and discerning individual pixels on a screen is just unphysical. It can’t do everything at once, because it’s subject to the same limits of physics as we are.
I calculated it to be around 1/200th of a second based on studies by the Air Force. (In retrospect my math might have been off) Unfortunately, I can no longer find the research since I no longer have unfettered access to that library and have since lost the record numbers.
IIRC, part (though not all) of the study involved flicking images on a screen and having the individual identify the image. Some were black on white, inverted, colored, etc. A whole mix of them.
If you get a film SLR camera, like a Nikon with a 1/4000th of a second shutter speed, take off the lens, open the film door, and look through the shutter, you still get a “full” image.
I’ve read that our eyes can perceive a single photon, I’m not sure what that would look like though!
That’s a change in brightness, and a single frame by itself (at very high speed) can often be perceived after the frame has occurred, but the brain can’t instantly recognize what was seen. So stringing together two very different images at high speed just results in the brain combining them. The result is the one full frame the brain did recognize being blurred / color shifted / changed in brightness by the frames the brain did not fully process.
If a camera can do it, it’s physically possible. Admittedly cameras can’t do high-speed, high-sensitivity, and high resolution at the same time. But some are switchable. Or the hypothetical alien could just have one eye for each type of vision.
We might be able to see an image flashed for 1/2000 of a second, but we couldn’t see two, one after the other, at that speed. It’s still only 15 or so new images per second.
How does that compare to our ability to spot flicker in moving objects? I’ve noticed that some traffic lights in the UK now seem to flicker, though I only spot it when I turn my head and it leaves a dotted trail of POV. Anyone else noticed this? I spot it most on the pedestrian signals. I assume that they’re flickering much faster than 15 Hz… possibly 50, as that’s our mains frequency, and a cheap AC/DC converter is the only reason I can think of to design them with a flicker.
When you turn your head, you’re using different retina cells in your eyes. Each cell has its own persistence of vision at around 24 Hz. But if you use different cells, your retina can spot separate flashes, since the flashes aren’t saturating the same cell over and over.
A full AC cycle consists of a negative and a positive half-cycle, and when you rectify it into DC it turns into twice the amount of positive pulses. Very simple LED bulbs simply have one string of LEDs going one way for the negative and another string going the other way for the positive half cycle, and they’re blinking in turns.
You could add an electrolytic capacitor to smooth out the pulses into a ripple, but that would be the first thing to break so they just don’t.
Really? Two strings of LEDs, each one only lit half the time? It would be much better to use a bridge rectifier; even without smoothing, at least you’d get 100% LED use.
Probably 100Hz, as they’ll be a bridge rectifier + LED installed in place of an old incandescent bulb and run off mains power, at least that’s what seems to happen in Australia. The 50Hz flicker with 50% duty cycle that you get from running an LED directly off AC is really obvious and ugly; try it sometime.
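The flicker rates being debated here fall straight out of the rectification scheme (assuming 50 Hz mains, as in the UK/Australia examples above):

```python
# Flicker frequency for mains-driven LEDs, assuming 50 Hz mains.
mains_hz = 50

# One LED string conducting on only one half-cycle (half-wave):
half_wave_flicker = mains_hz        # 50 Hz at 50% duty -- very visible
# Bridge rectifier, no smoothing: both half-cycles become pulses:
full_wave_flicker = 2 * mains_hz    # 100 Hz -- much harder to spot

print(half_wave_flicker, full_wave_flicker)  # 50 100
```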
The interesting thing (to me) is that now we have LED bulbs running off the old relay-based traffic light controllers, you can see the short (50ms?) dead-time flicker as the lights change modes, even for lights that aren’t changing state. They go X -> off -> Y and where X==Y on a particular bulb, you can just catch the blink that was previously invisible with an incandescent.
Most LED’s are PWM’d to some extent, most modern cars have LED lights and they appear to “jump around” in a very odd way in your rear view mirror because the mirror vibrates with the car and moves the image around.
Let me nitpick a little about the sound part. The best human ears (young ones) can hear closer to 20 kHz, and I myself am pushing 50 and can still hear up to about 17 kHz. The 14 kHz number is definitely wrong. Many (most?) speakers and amplifiers are likewise capable of reproducing 40 kHz or even higher frequencies, although obviously our standard CD sampling rate (44.1 kHz) is only capable of 22.05 kHz reproduction. Higher sampling rates of 96 kHz or even 192 kHz are capable of reproducing much higher frequencies, although they are useless for human ears.
As for sounds appearing “low and dull”–well, that really depends on what you are reproducing. I don’t believe there are many natural phenomena that generate frequencies above 20KHz at significant amplitude; most species that can hear very high frequencies use that ability for echolocation or communication with each other. Babbling water reproduced on decent audio equipment would likely sound like babbling water even to an Oculako. If you reproduce babbling water and eliminate frequencies above 10KHz, it still is easily recognizable as babbling water.
I do agree with the premise of the article, that communicating with a species that has evolved under vastly different conditions would be difficult, but some of the sound examples given are questionable.
Yes it very much depends on what sounds are being reproduced. I’d be curious to know what the upper limit on sound frequency is but couldn’t find a good reference for that. I also assume that babbling water is a, more or less, white noise source but this is possibly not the case. Of course, what sounds the Oculako is used to hearing is also anyone’s guess!
I’m not sure there is an actual upper limit of sound frequency, though since sound is pressure waves traveling through a substrate of some kind (air, for example), I suspect that that the maximum possible frequency in a given substrate is in some way relative to the speed of sound in that same substrate. I could be completely wrong, of course.
Apparently it’s around 3GHz in air. Not due to any sort of frequency-speed dependence (for non-ultrasound) but due to the mean-free-path (MFP) between molecules. If the wavelength is less than the MFP it doesn’t propagate well and dissipates.
Ultrasound gets dispersed by CO2 in the atmosphere. For other frequencies air is a non-dispersive medium, but with CO2 you get speeds that are dependent on frequency, i.e. phase velocity, which causes dispersion above 28 kHz.
The upper limit depends on the medium. The limits are the speed of sound, and the mean free path between molecules; if, statistically speaking, the wavelength is either approaching 1 molecule per wavelength, or the likelihood of hitting the next molecule is not much greater than thermal noise, the wave will not propagate.
Based on this, you might expect “sound” up to ~5GHz in our atmosphere (speed 300m/s, MFP 68nm); You’d be hard pressed to detect this though.
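That estimate can be reproduced directly; the figures below are the approximate values quoted in this thread (a standard speed of sound of ~343 m/s rather than the rounded 300 m/s above):

```python
# Upper bound on sound frequency in air from the mean free path (MFP):
# a wavelength shorter than the distance between molecular collisions
# can't propagate as a coherent pressure wave.
speed_of_sound = 343    # m/s in air at ~20 C
mean_free_path = 68e-9  # m, approximate MFP of air at sea level

f_max = speed_of_sound / mean_free_path
print(f"{f_max:.2e} Hz")  # on the order of 5 GHz
```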
Attenuation is a big thing. Medical ultrasound, which usually operates in the range of 1-12 MHz, struggles to propagate beyond 1-2 cm at the higher end, and even in denser mediums like that lipoprotein honeycomb of water you call a body, higher frequencies are limited in the distance over which they can be detected.
(Using 12MHz ultrasound you’d struggle to get to 20cm and back. This is part of the reason why ultrasound machines will have several probes)
Second is coupling; part of the secret to a good ultrasound is the “acoustic coupling fluid” (aka jelly), without which images at any power or frequency cannot be resolved. I imagine this is akin to impedance matching in electronics.
Put another way; the alien may be able to hear ultrasonic range well beyond the MHz, but not in our atmosphere at human-scale, and not with an ear constructed like ours.
You also need to take into account that sound (the physical interaction) is not sound (the perceptual sense); physically it is just atoms vibrating back and forth against each other. Sound as a perceptual sense is a perception created by a conscious mind, exclusive to life and large-brained animals. If the phonons are low energy, then vibration is the result; if higher energy, then heat is the result.
14-16 kHz in audio CD reproduction comes from the early days of CDs because brick-wall lowpass filters and proper reconstruction algorithms were not available.
You basically get quantization error as you approach the 22.05 kHz Nyquist limit of CD audio, and simple D/A with lowpass schemes would cause a beat frequency error to appear, in a similar way that aliasing happens on a television screen both above and below the resolution limit. As a result, the early CDs were mastered with the low-pass filter set to around 16 kHz, since there wasn’t anything useful you could get out of it beyond that.
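A minimal sketch of that arithmetic; the beat figure assumes the crude zero-order-hold reconstruction described above, which leaves a spectral image at fs − f:

```python
# Why early CDs low-passed well below Nyquist: with a naive D/A and no
# steep reconstruction filter, a tone at frequency f leaves an image
# at fs - f, and the tone and image beat audibly at fs - 2f.
fs = 44_100       # CD sampling rate, Hz
nyquist = fs / 2  # 22,050 Hz -- the true limit

f = 21_000        # a tone close to Nyquist
image = fs - f    # spectral image from crude reconstruction
beat = fs - 2 * f # audible beat between tone and image

print(nyquist, image, beat)  # 22050.0 23100 2100
```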
If it wasn’t for the limitations of the U-Matic video recorders with PCM adapters used in the early years of digital mastering for CD audio, it would be 48 kHz instead of 44.1 kHz. 48 kHz was the standard (a very new standard at the time) for digitizing audio. Using the PCM adapter, a U-Matic video tape could store six samples of audio data per video scan line. With PAL’s 294 lines on U-Matic, it came out to exactly 44,100 samples per second.
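The 44.1 kHz figure checks out per channel; the breakdown below assumes the “six samples per line” covers both stereo channels, i.e. three per channel, which is consistent with the numbers quoted above:

```python
# Where 44.1 kHz comes from in the PCM-adapter video format.
samples_per_line_per_channel = 3  # assumed: 6 per line across 2 channels
active_lines = 294                # usable lines per PAL field on U-Matic
fields_per_second = 50            # PAL field rate

rate = samples_per_line_per_channel * active_lines * fields_per_second
print(rate)  # 44100 samples per second, per channel
```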
Storage problem solved by leveraging equipment already in production with a box of solid state electronics instead of an all new tape format and machinery. U-Matic tapes were already there and had strong shipping containers for sending TV shows long distances.
By adding one piece of equipment, any video recording studio could become an audio recording studio for CD mastering.
The technology even trickled down to the prosumer/consumer market. Some manufacturers of Betamax VCRs offered PCM addons for recording and playing digital audio onto video tapes* and there were some very limited releases of commercial Betamax audio recordings before the CD killed off all other consumer digital audio formats.
*NCR had one for their V70 and V71 Beta VCRs. IIRC up to 24 hours of audio could be recorded on the longest Beta tapes. Those NCR made VCRs had zero Sony made components, which meant NCR paid a higher licensing fee per unit than companies who bought core components from Sony. I still miss my V70. It had around 40 inputs, outputs, knobs, sliders, jacks, meters etc encrusting the front and back panels, plus it had a NiCd battery backup for the clock and program timer and a light with a mirror that flipped down for visual confirmation of tape remaining through the smoke tinted transparent tape flap on the front. Just in case you didn’t trust the digital counter.
The question about a tree falling in the forest crops up here. One question here is what is the highest frequency of vibrations that can be transmitted through the atmosphere. The other question is what is the highest frequency of ‘sound’ the human ear can perceive. That is determined by the hairs inside the cochlea.
Those audiophile sound cards that can record at 192kHz 24-bit although useless for human hearing are fantastic for RF reception at VLF, ULF, SLF, ELF frequencies.
The idea of high frequency recording and playback is that the high frequencies modulate the low frequencies in an audible way.
There are sound cannons/projectors that operate by having two coincident ultrasound transmitters, which produce an audible sound by interference at the point where the wavefronts meet. If you stick your head into it, you’ll hear sound, and everybody else hears the sound as coming from you.
I would start with an LED with some basic on/off patterns, then try to introduce basic arithmetic concepts, and wait for the alien to respond with the same methods to confirm we are on the same wavelength. From there I would be able to make associations with other, more comfortable communication media, all the time checking if we are still understanding each other by means of tests (like 2+1=3, 3+1=?), and so on to more complex concepts to widen the common language. I would assume the alien is also thinking hard on what I mean and how to respond to be understood.
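The proposed blink protocol could be sketched like this; the unary encoding and the three-slot pause separator are made up here purely for illustration:

```python
# A toy LED protocol: numbers as unary blink bursts, with long pauses
# separating the terms of "a + b = c". Encoding is invented for
# illustration only.

def encode_sum(a, b, result):
    """Encode 'a + b = result' as an on/off blink pattern."""
    def blinks(n):
        return [1, 0] * n  # n short flashes
    gap = [0, 0, 0]        # long pause = term separator
    return blinks(a) + gap + blinks(b) + gap + blinks(result)

def decode(pattern):
    """Recover the terms by counting flashes between long pauses."""
    terms, count, zeros = [], 0, 0
    for state in pattern:
        if state == 1:
            count += 1
            zeros = 0
        else:
            zeros += 1
            if zeros == 3 and count:  # long pause ends a term
                terms.append(count)
                count = 0
    if count:
        terms.append(count)
    return terms

print(decode(encode_sum(2, 1, 3)))  # [2, 1, 3]
```

The point of the round trip is that either party, human or alien, only needs to count flashes and pauses; no shared language is assumed.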
Use an incandescent bulb… it puts out light across a wider spectrum, so there’s a better chance of hitting something that is visible to an unknown.
Nice. Going further, we may downshift to pure mechanical movement, which is most basic and universally perceptible for any matter-based lifeforms.
Reminds me of this part of the film contact: https://youtu.be/-SbKE_U4b7U?t=3m56s which also reminds me I should really read the book. :)
This audiobook section of Contact on aliens transmitting prime numbers is also fun: https://youtu.be/nzC4tZ7suYc?t=51m38s
We already have “alien” life to try to communicate with: animals!!! If we can communicate with them, what makes us think we can do better with off-the-earth variety? Seriously, we have so much to explore in communication right here.
The majority of the species on the planet don’t even approach human levels of intelligence. The few that do are eclipsed by our children. And of those few, we do communicate with them on a rudimentary level. We can train dogs to anticipate our needs, elephants to paint, great apes can use sign language, African Grey parrots can use our languages, dolphins are trained to find mines or assist swimmers, and ravens and crows can solve puzzles and mimic our words. The communication may not always be the most meaningful, but it is communication. Alien life may view us as we view Koko (Gorilla) or Alex (African Grey parrot), but the premise is that Humanity is not the first intelligent life in the universe. Maybe that’s wrong and we end up being the first intelligence, but even then there could still be other societies out there.
That is a very, very narrow, anthropomorphic way of thinking… I would not call that intelligence!
Yes, since humans are so intelligent, we should be able to hack ourselves down to the level of animals in order to communicate with them at their level. Animals communicate according to their needs and senses. Bird brains are very small (due to natural selection of a small efficient brain that can be lifted in flight) but we are now learning some can see magnetic fields. So what may have seemed “random” to us wrt animal behavior may have more to do with animal senses being different as this article points out.
Also, bird songs have been shown to have been taught from one bird generation to another and from one location to another location. Not just random chirping. Who knows what else we are missing that is right before us. Maybe it conveys more meaning than, “here I am/ I am breeding/food is here/danger” It may be limited since a bird has simple needs, but it might also convey strategy towards their goals.
Yes, the list is endless. There is basically NOTHING particular to humans, except their physical / genetic heritage. Culture? Apes have some. I mean, we can see it in apes because it’s closer to our culture. Languages and local dialects? Birds do it. War? Ask some ants. They practice genocide. Drug use? Drunk elephants rampaging through villages, stoned deer hallucinating on the roads… “deviant” sexuality? Rape, homosexuality, group sex, inter-species sex… I’ll stop here.
But we don’t know what other animals think. And not only are we unable to know it, but we bend everything we examine through the prism of humanity.
Two questions: 1) how would you define intelligence? 2) if we intend to communicate with something, how can we communicate if its intelligence is not in any way similar to our own? Are rats intelligent because they can solve a maze, or because they laugh? Sure, but not to the same degree or in the same manner that humans are. Are cetaceans and cephalopods intelligent? Definitely. In the same or a similar manner to humans? Unclear.
Reminds me of the scene out of ST-“The Voyage Home”. Where the “alien probe” is sending out messages (to Earth), and they ASSume it’s for humans. Wherein, Mr Spock reminds them that – “There are other forms of intelligence on Earth, Doctor. Only human arrogance would assume the message must be meant for man.”
Also, what is to prevent bionic augmentation of whatever hypothetical alien species ? Sort of like Riddick’s eye’s in “Pitch Black” – ‘Then you got to get sent to slam, where they tell you you’ll never see daylight again. You dig up a doctor, and you pay him 20 menthol Kools to do a surgical shine job on your eyeballs. ‘ (so you can see who’s sneaking up on you in the dark)….
We cannot hold a conversation with an animal simply because they do not exhibit the capacity to do so, not because we have not found a way. That’s pretty much why we are the dominant species: we evolved the capacity for complete language as we know it.
Other species have come close, for example some human-animal communication is even two-way – examples; sign language communication with great apes (ie, Koko), simple spoken communication with parrots (ie Alex).
Predominantly (but not exclusively) one-way communication is evident in humans’ interactions with, for example, dogs. They have not really evidenced sufficient capability for language or higher thought to form their own language constructs, so it appears likely that we have gone as far as it is possible to go with most animals at this point, until such time as they evolve higher language capability. We might be waiting a while!
You don’t even have to go down the evolutionary scale to find aliens. Anyone who is not a citizen of the same country as you is an alien.
PS: is it just me, or do these videos on Hackaday “self destruct”? I can only view them once. After that, they are a screen full of links to other videos.
Have you never watched a YouTube video? No offense. You can just click the little circle-like arrow on the bottom left.
No circle is there. I’m always on hackaday with my phone so maybe that is why. A simple workaround is to hit refresh on the webpage to bring back the video.
Scraping the bottom of the barrel today are we? Had to resort to Internet Creepy Pasta archives? I like SCP-328 too.
Er, it’s an article on human perception, and in particular how its weaknesses allow us to invent handy things like video and CDs. Not a scary alien story. Okulon or whatever doesn’t spawn face-huggers.
If we had full colour perception, a TV would look like one of Andy Warhol’s Marilyn Monroe prints. A lot of separate, but superimposed, R G and B with the other colours missing.
Colours of light don’t really “mix” at all, except in the brain, and that’s just our limited perception, caused by the eye using a cheap interpolation hack instead of real colour perception. It’s something they don’t teach when they do basic colour theory in schools. They really should; “how do colours of light actually mix” is something that occurred to me one day by myself.
Red and green don’t actually “mix” to “produce” yellow at all, despite what schools teach us. It’s just an illusion. Why don’t they teach us the remaining part that actually makes sense of it all?
Yes, spot on. And I agree it’s strange that colour “mixing” being an artifact of human perception isn’t taught in school explicitly.
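The effect being described is metamerism: physically different spectra that excite the cones identically and are therefore indistinguishable to the brain. A toy sketch; all the sensitivity numbers below are made up for illustration (real cone response curves are broad, overlapping functions of wavelength):

```python
# Toy illustration of metamerism. Sensitivity values are fictitious.
# A pure ~580 nm "yellow" and a red+green mixture can produce the same
# cone excitation, so the brain cannot tell them apart.

def cone_response(spectrum, sensitivity):
    """Total excitation of one cone type by a light spectrum."""
    return sum(power * sensitivity.get(wavelength, 0.0)
               for wavelength, power in spectrum.items())

# Fictitious cone sensitivities at three wavelengths (nm):
cone = {540: 0.5, 580: 1.0, 620: 0.5}

pure_yellow = {580: 1.0}               # one narrow spectral line
red_plus_green = {540: 1.0, 620: 1.0}  # two lines, nothing at 580 nm

print(cone_response(pure_yellow, cone))     # 1.0
print(cone_response(red_plus_green, cone))  # 1.0 -- indistinguishable
```

A spectrographic eye like the Oculako's would of course see the two spectra as completely different.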
As usual, the scientist fails to grasp philosophy. Your premise that aliens with more advanced anatomy cannot “read” our communication technologies fails to consider that successful communication of any kind requires dialogue more than data gathering. Take SETI. They search for patterns that do not occur in nature, a sufficiently high signal to noise ratio. The technology and method are as basic as it gets. In the same vein, an alien that saw a scan line on an LED display would still know that something intelligent was happening. Thus, Sagan was correct that replay is both the easiest and ultimate SETI signal. Interpretation and context always come later. Repetition develops understanding. If you point at yourself and say Alice then point at the child and say Bob, eventually the child realizes that his name is Bob and his teacher is Alice. Communication between human and alien would necessarily be similar, with each taking turns as student. With apologies to McLuhan, the pattern alone is the message.
It’s not really about aliens, that’s a metaphor to explain human perception vs the way things actually are. The “alien” is about how reality actually is, beyond our naked perception.
Even your example of pointing and saying things is flawed, most animals in nature don’t understand what a human pointing means. Some dogs and higher primates understand or can be made to understand, but those are examples of an animal that was basically created by humans over many millennia and animals who also have fingers and a relatively recent ancestral divergence from humans. There’s no reason to assume that an alien would ‘eventually realise’.
We also have, and in the past often only had, slow scan images transmitted via shortwave and even satellites. We could not see it with our bare eyes, but we know to collect the lines to form an image on a medium our eyes can perceive. You don’t necessarily need to view stuff ‘live’.
We are ALL aliens then?… We ALL view things similarly, but once it gets to the cortex level, that’s where individual interpretation differs? I’m just spouting; love everything about the making community, Adafruit, Lady Ada, Phil Torrone… wish I had their mental abilities… wouldn’t it be amazing if we ALL could have Einstein’s IQ?
QFT! There’s enough difference in perception between human cultures, long before you start to consider extra-terrestrial intelligences or AIs. But it’s a good thought exercise, and might even give us insights into communication between humans.
So what could we infer about this Oculako? How big would the eyes be, the ears, the “brain”, etc.? “Why grandma, what big ears you have …”
Certainly this depends to some extent on the technology used in the sensors, lenses, processing hardware/liveware, etc. The resolution of our vision is limited by the size of our eyes, the density of receptors, etc. A different organism could use different sensors, but still physics and chemistry place some limits on those (e.g. can you in theory make a sensor or signal path smaller than an atom).
This seems like a pointless article. If aliens are using televisions and LCD monitors to view our communications, presumably they have come to Earth. If they have the technology to come to Earth, they presumably should be able to figure out how TVs work and deduce they need to modify their view of the screens, just as you have with your video links.
For an organism to have spectrographic vision each individual receptor would need a nerve bundle as large as that coming off the cochlea in the ear. Unless this “advanced” eye used a scanning method and multiplexed the sensory data along a smaller number of bundles, which would then induce other limitations such as a lower perceptual frame rate.
Nature does not optimise for extended feature sets, it optimises for efficient adaptation, and this tends to reduce complexity down to the minimum necessary to counter selection pressures (not getting eaten etc.).
Why would you assume that an alien was bound by the constraints of evolution? We are already beginning to augment our bodies with technology in numerous ways, in a high tech future we might use spectrographic cybernetic eyes, or we might choose to alter our own biology, or something else entirely. Certainly the electronic implants are already being worked on here with partial success in restoring the sight of people with damaged eyes.
Yes, but only if you jumped out of the context of the metaphorical “other that is different in how it sees the world”. If you are talking about an advanced self-modifying entity, then it would not have any limitations that would make it relevant to the essay above. It would be able to find the data in any signal.
Our eyes, for instance, under optimum conditions are within ~2x the Rayleigh criterion given their size. With really good signal processing you could get better, but not infinitely better: so if an alien had eyes of similar size to us, they’re not going to have orders of magnitude better spatial resolution.
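The diffraction limit mentioned here is easy to put a number on; the pupil diameter below is an assumed typical value:

```python
import math

# Diffraction-limited angular resolution of a human-sized eye
# (Rayleigh criterion: theta = 1.22 * lambda / D).
wavelength = 550e-9  # m, green light
pupil = 5e-3         # m, assumed ~5 mm pupil diameter

theta = 1.22 * wavelength / pupil    # radians
arcsec = math.degrees(theta) * 3600
print(f"{arcsec:.0f} arcseconds")    # roughly 28 arcseconds
```

So an alien with eyes our size faces the same ~tens-of-arcseconds floor, however good its retina and processing are.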
It’s completely reasonable to believe that an advanced entity would hit a practical optimum in terms of sensor resolution, sensitivity, color separation, etc.
Again, a flawed argument: if physics limits an advanced alien, then it limits what we can do even more; therefore the alien can always sense, and understand, more than we can.
“Given the technological advances since the 1970s how would you design this era’s golden record?”
Most of the tech advances of the last 35 years would (1) be needlessly complicated, (2) require more hardware to be shipped with the record, and (3) make vast assumptions about what the finder could make of it. I would say that an analog record and playback cartridge would be just the things to ship on an interstellar probe today. The design of the Golden Record could be figured out, given little or no other information than what is on the record itself, by an enterprising ham radio operator today — you couldn't say that about anything even slightly more complicated, IMHO.
The problem with data compression, as a corollary to what you're saying about modern technology, is that the better the compression, the more indistinguishable the data becomes from noise if you don't know what you're looking for. All repetition and predictability is optimised out by design.
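The "compressed data looks like noise" point can be illustrated by measuring the byte-level Shannon entropy of a repetitive text before and after compression; zlib here stands in for any general-purpose compressor:

```python
import math
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; 8.0 would be indistinguishable from random."""
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# Highly repetitive input: lots of exploitable structure for a compressor
text = ("the quick brown fox jumps over the lazy dog " * 200).encode()
packed = zlib.compress(text, level=9)

print(f"plain:      {byte_entropy(text):.2f} bits/byte, {len(text)} bytes")
print(f"compressed: {byte_entropy(packed):.2f} bits/byte, {len(packed)} bytes")
# The compressed stream is far shorter and its bytes look much closer to noise.
```

A finder who doesn't know the codec has little repetition or predictability left to latch onto, which is exactly why the Golden Record's analog grooves are the friendlier format.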
I think with the record, the fact that it's a continuous track of apparently meaningless waves would imply that the waves store some kind of information. Assuming the aliens have hearing is a bit of a leap, but if you don't evolve in a literal vacuum, hearing makes sense as a way to perceive the world, and later as a means of communication.
“Its high resolution eyes easily able to pick out the individual pixels in our displays, all it sees is a curious ever-changing mosaic of color with no discernible meaning.”
Assuming it never evolved to combine adjacent colors into meaning. If certain colors are a "shape" to it, it might more or less amount to the same thing. It doesn't see the combination as we do, but it understands the pattern of colors next to each other.
The question is if it had evolutionary pressure in this direction. Does pigmentation on its world ever require “pointillism” to be interpreted by it?
I must admit, I am fairly ignorant about life on Earth. Do we have animals or plants which also use alternative colour pigments when viewed at that level?
I think the argument of the article was that the theoretical being views things at such a high detail it can’t see the overall pattern. Like if humans stare at a micrograph of everyday objects we can’t always identify them for what they are, but zoom out a little bit and the pattern becomes clear.
As for alternate color schemes, many insects and a few higher animals (sharks) see UV and/or polarized light. Bees and some beetles are thought to navigate in part by polarized light patterns in the sky. So depending on what the question* is, yes there are animals that use different parts of the spectrum in ways humans don’t.
*Q1) Is there more information contained in the light beyond its color?
A1) As above, yes: animals use polarized light to navigate.
Q2) Is there an animal that uses color as an abstract idea outside of just discerning different objects?
A2) Not that we currently know of.
Humans use colours to navigate, not just with the shapes on our signs but with the actual colours, i.e. traffic lights, especially at night vs. daylight. As such, it is not such a reach that an alien as intelligent as we are would also infer context from the differences between various frequencies.
As for A2, chameleons change colour to show emotion (and NOT for camouflage!). As do squid. Flatfish change colour for camouflage; you've possibly seen pictures of them floating over chess boards doing an amazing job. Whether they use it for other things, I dunno.
There's also the widely-known natural signal of "bright yellow = poisonous", which applies to bees and wasps (well, their sting) and other insects, frogs, and snakes, and perhaps more. Similarly, brightly coloured berries, often red, can mean "poison".
“I think the argument of the article was that the theoretical being views things at such a high detail it can’t see the overall pattern. ”
Sure, but I was conjecturing that it would be odd for that resolution of sight to evolve without an expanded ability to recognize things from it as well.
Q1) You mean like a spectrograph? So the aliens could tell some of the elemental makeup from the absorption lines?
A1) Humans can too, vaguely. I lost the link, but there is a slight polarization effect even humans can see when looking at the sky (Haidinger's brush). It was something to do with the blotches you can see at times.
Even if the alien’s colour senses are different, humans do quite well watching black and white films. Fortunately big versatile brains can compensate for lacking sensory data. Indeed pareidolia is when we do this too much.
Yep, we call them illegal aliens because they don't have the right paperwork ;). Way cool to see these vids. I had a theory about this decades ago but never a nice video to show it in real life. Sometimes we miss the good ol' ion cannon screens, like for realistic gameplay in MAME cabinets. But indeed, luckily technology moved forward, and LCD is way easier on your eyes if you spend hours per day in front of a screen.
Let's look at their environment to determine their ability to perceive. They're intelligent, so their "brains" need energy, and have evolved. So their primary source of energy, like ours, will be solar (or it will be their food source's primary energy source, like ours). This means they're good at seeing their sunlight: a distant fireball. Their sun is not a brown dwarf (they would have fallen in by now).
Yes, they could have moved 100 times since that upbringing, but their transportation HUDs were made for the initial group, and it ain’t broke.
Nah, doesn’t follow that they couldn’t be blind. Or see in Gamma rays or something. The human-visible spectrum is a tiny fraction of the EM band. Doesn’t even follow that they’d necessarily draw their energy ultimately from a star, could be chemical energy. There’s weird little things at the bottom of the sea that are entirely powered by chemicals from deep-Earth vents, no input from the Sun at all.
You’d need to know their actual environment. But then again if they’re as advanced as us, they’re going to have ways of storing information and converting it into something that can be perceived. Like record players etc, even if they have to dig one out of a museum. If they’re smart they’ll get the idea of the Voyager record. They’d have to be fairly smart to intercept it in space, unless it ends up on some bit of rock with no atmosphere to burn it up.
It was a message in a bottle anyway. Probably the sending of it was more important than the chance of someone receiving it.