Bluetooth Speakers

Discussion in 'Computer Science & Culture' started by Bowser, Sep 5, 2017.

  1. Bowser Namaste Valued Senior Member

    Messages:
    8,828
    So a guy at work brought his Bluetooth speaker to work. I was so impressed with it that I bought two for myself. Bluetooth has been around for ages, but I've never really used it until now. The technology has come a long way. Rather than fiddle with DVDs and a huge stereo system, I can open an app on my phone and stream music to a speaker and hear it anywhere I want. Totally amazing.

    Having watched the evolution of electronics for several decades, I can only imagine what will come in the near future. You youngsters have an interesting time ahead of you.
     
  2. DaveC426913 Valued Senior Member

    Messages:
    18,935
    This has got to be the first generation in history to systematically accept lower fidelity than their ancestors.
    Digitally compressed music is crap.
    Digitally printed photos are crap.
    Digitally streamed video is crap.

    So strange. Advancements in technology used to bring us higher quality, not lower.
     
  3. Bowser Namaste Valued Senior Member

    Messages:
    8,828
    It sounds fine to me. I've heard others complain about the quality, but it doesn't seem to have any relevance, really. I might add that the sound of a scratched record died with the 45 and LP.
     
  4. DaveC426913 Valued Senior Member

    Messages:
    18,935
    Maybe the younger gen is just insensitive to the sound or sight of compression artifacts.

    Or maybe they hear it, but don't consider quality to be an important feature over convenience.

    Right, but that is a failure of the recording. Not a designed-in, accepted feature.
     
  5. Bowser Namaste Valued Senior Member

    Messages:
    8,828
    It's probably not noticeable for most. I grew up listening to vinyl, and I don't miss it. The younger generation has nothing to compare it to. My kids grew up on digital music and probably don't know what an LP is.

    Also, the advantages are too good to go back. For ten dollars a month I have access to 10,000,000 songs. I can download music to my phone over my home internet and play it offline anywhere I go. For around twenty dollars I can buy a Bluetooth speaker that will connect to nearly all my devices. The technology and industry have changed.

    Those things were scratch magnets.
     
  6. Michael 345 New year. PRESENT is 72 years old Valued Senior Member

    Messages:
    13,077
    For $10 a month I can keep $10 every month
    I have a fairly large CD collection and have recently moved most of it onto external computer hard drives
    There is little in modern music I have been tempted to buy
    While I used to occasionally appreciate high fidelity, as well as earthquake woofers, at my age the subtleties of frequency are lost on the hairs in my Organ of Corti
    Apart from the 1812 Overture, I still enjoy the occasional Bolero at FULL volume

     
  7. Bowser Namaste Valued Senior Member

    Messages:
    8,828
    That's where I've been, but I'm thinking I need to expand my range of music. Right now I'm on a 30 day trial with Amazon Music, so the ride is free for now.
     
  8. billvon Valued Senior Member

    Messages:
    21,634
    Reminds me of the people who claimed that amplification was crap, and that only direct phonographs were 'real.' Then transistor amps were crap, and only tube amps were "real." Of course, in both cases, people just preferred one kind of distortion to another. Sounds like you prefer analog distortion, which is no problem at all. You can pay as much as you want for your favored type of distortion.
     
  9. DaveC426913 Valued Senior Member

    Messages:
    18,935
    There's a difference between unintended distortion because we are at the limits of the technology, and designed-in distortion for the sake of convenience.
     
  10. billvon Valued Senior Member

    Messages:
    21,634
    RIAA equalization on phonograph records (pre-emphasis curves appeared around 1940; the RIAA curve became the standard in the mid-1950s) was "designed-in distortion for the sake of convenience," specifically so they could fit more songs on each record.

    Dolby B and C on cassette tapes were "designed in distortion for the sake of convenience." They also improved the sound quality overall, and allowed cheap tapes to produce better sound.

    One of the nice things about today's technology is you can get any fidelity you want; you are not stuck with the manufacturer's design. Want completely uncompressed audio? You can get that via the old standby WAV. Want to compress it without any loss? FLAC and APE are there for you. They don't affect the sound quality at all, but reduce file size. I imagine there are purists out there who claim that FLAC is "distortion for the sake of convenience" and for them there is always WAV.
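    As a rough sketch of how anyone could check that claim themselves (assuming ffmpeg is installed, and using a hypothetical song.wav as input): encode the WAV to FLAC, decode both back to raw PCM, and compare checksums.

        # Sketch: verify that FLAC is lossless by round-tripping a WAV file.
        # Assumes ffmpeg is on the PATH; "song.wav" is a hypothetical input file.
        import hashlib
        import subprocess

        def pcm_hash(path):
            """Decode a file to raw 16-bit PCM and return its SHA-256 digest."""
            pcm = subprocess.run(
                ["ffmpeg", "-i", path, "-f", "s16le", "-"],
                capture_output=True, check=True,
            ).stdout
            return hashlib.sha256(pcm).hexdigest()

        # Encode WAV -> FLAC (-y overwrites any existing output).
        subprocess.run(["ffmpeg", "-y", "-i", "song.wav", "song.flac"],
                       capture_output=True, check=True)

        # The decoded sample data should be bit-identical.
        print(pcm_hash("song.wav") == pcm_hash("song.flac"))  # expect: True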

    Want smaller files than FLAC can give you? Then you can use MP3 or AAC. Both let you choose what bitrate you want to compress at. Want almost perfect sound? Compress at 320kbps. Want pretty good sound? Compress at 192. Want to fit as many songs on your player as possible? Go with 128 (or even 64, although at 64 the distortion is clearly audible.) Your choice.
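    And to see the size trade-off concretely, here's a quick sketch (again assuming ffmpeg and a hypothetical song.wav) that encodes the same file at several bitrates and prints the resulting sizes:

        # Sketch: encode one WAV at several MP3 bitrates and compare file sizes.
        # Assumes ffmpeg is on the PATH; "song.wav" is a hypothetical input file.
        import os
        import subprocess

        for kbps in (320, 192, 128, 64):
            out = f"song_{kbps}.mp3"
            subprocess.run(
                ["ffmpeg", "-y", "-i", "song.wav", "-b:a", f"{kbps}k", out],
                capture_output=True, check=True,
            )
            print(f"{kbps} kbps: {os.path.getsize(out) / 1e6:.1f} MB")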
     
  11. DaveC426913 Valued Senior Member

    Messages:
    18,935
    I ... did not know that.
     
  12. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,266
    Whether or not the sound quality is affected, wouldn't this depend entirely upon the efficiency of whatever one is using for playback? FLAC, while lossless, is still compressed--couldn't the means by which the file is decompressed possibly add noise?

    I'm simply trying to account for all those who claim they hear a difference. Personally, I sometimes hear a difference, though I couldn't really put what that is into words--were I to attempt it, I'd venture "something at the lower end" or "something to do with stereo separation," but really, I don't know what it is. It's simply that sometimes I think I can hear a difference.
     
  13. billvon Valued Senior Member

    Messages:
    21,634
    Not really. By definition the output data is exactly the same as the input data, so there's no difference.
     
  14. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,266
    I'm referring to the means by which it is played back. Prior to a few years ago, most OSes did not natively support FLAC--most do now, I believe. Could software limitations account for the perceived differences many profess to hear?

    Personally--and frankly--I do not trust my own judgement on that matter. Also, I simply do not have the means to attempt a comparison--in the past, it was always in professional contexts where they had vastly superior shit to anything I could ever hope to own. That said, the supposed "differences" I've heard about are of a very different nature than, say, what you hear when you drop from a 320 kbps MP3 to 192 kbps, and the contentions were hardly unanimous in pronouncing WAV as "better sounding" than FLAC. Rather, they were just somehow "different."
     
  15. billvon Valued Senior Member

    Messages:
    21,634
    Well, let's put it this way.

    If you play a WAV file through speakers, then compare it to a FLAC file of the same song played through the same speakers, the sound will be identical - because the bitstream being fed to the D/A converters that drive the speakers will be identical. The D/A converters have no way to know if it's a WAV or FLAC file; the data is identical in both cases.

    If the speakers suck then it will undoubtedly sound bad. But that's the case with any playback system.
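    If you want to see that for yourself, here's a minimal sketch (assuming the soundfile and numpy Python packages, with hypothetical file names for a WAV and a FLAC encoding of the same recording) that loads the decoded samples from both files and compares them:

        # Sketch: a WAV and its FLAC encoding decode to identical samples.
        # Assumes the soundfile and numpy packages; file names are hypothetical.
        import numpy as np
        import soundfile as sf

        wav_data, wav_rate = sf.read("song.wav", dtype="int16")
        flac_data, flac_rate = sf.read("song.flac", dtype="int16")

        # Same sample rate and bit-identical samples: the D/A converter is fed
        # exactly the same stream either way.
        print(wav_rate == flac_rate and np.array_equal(wav_data, flac_data))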
     
  16. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,266
    Sorry to belabor this, but back up here a sec:

    I know that FLAC is genuinely lossless, and that if you take a WAV file, convert it to FLAC, and then decompress back to WAV, you will get something identical to the original WAV file--checksums match, and all that. But not all media players are the same. IOW, the supposed "difference" some claim to hear might be an issue of latency relating to the speed and efficiency of processing the conversion.

    I'm just trying to understand how certain engineers, with the requisite maths and physics and the full knowledge that what they are saying doesn't--or shouldn't--make sense, still insist that they hear some sort of difference. Obviously, having the requisite knowledge doesn't necessarily mean also having superior hearing, and it doesn't preclude insanity, but still...
     
  17. billvon Valued Senior Member

    Messages:
    21,634
    Right.
    The conversion latencies do not affect sound reproduction, since the D/A converters work at a fixed clock rate (usually 44.1 kHz). Either the algorithm works and delivers data at that rate, or it doesn't work and you hear dropouts (or, more commonly, you get no sound at all). Latency in this case means it might take half a second instead of a quarter second between the time you press play and the time the song starts - but once it starts, the sound is the same (assuming the algorithm works).
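    To put rough numbers on that (a back-of-the-envelope sketch at the usual 44.1 kHz rate): each sample lasts about 23 microseconds, so even a few milliseconds of missing data means hundreds of dropped samples - far too much to slip past unnoticed.

        # Sketch: sample period and dropout size at the standard 44.1 kHz rate.
        SAMPLE_RATE_HZ = 44_100

        sample_period_us = 1e6 / SAMPLE_RATE_HZ        # ~22.7 microseconds per sample
        samples_in_5_ms = int(0.005 * SAMPLE_RATE_HZ)  # ~220 samples lost in a 5 ms dropout

        print(f"{sample_period_us:.1f} us per sample, {samples_in_5_ms} samples in a 5 ms dropout")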
    Because they wish to hear a difference. There have been plenty of examples where people were double-blind tested and couldn't tell the difference between various audio formats. In a recent test where listeners compared music encoded as 128 kbps MP3, 320 kbps MP3, and WAV (i.e. lossless), the average score was about 60% accurate. Only 3% could accurately tell the difference across all six songs - and that's identical to random chance (i.e. pure guessing would give you 3.1%).

    Heck, there has been talk recently about the value of 24 bit recordings (256x the dynamic range of the standard 16 bit, or about 48 dB more) so a website ran a test between a large-word-size audio sample and a small-word-size sample. Most people could not tell the difference. The website then revealed that it was actually comparing 16 bits to 8 bits - and people STILL couldn't tell the difference.
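    For reference, a quick sketch of where those figures come from, using the usual rule of thumb of about 6 dB of dynamic range per bit:

        # Sketch: theoretical dynamic range for common PCM word sizes
        # (~6.02 dB per bit of resolution).
        for bits in (8, 16, 24):
            print(f"{bits}-bit: ~{6.02 * bits:.0f} dB, {2 ** bits:,} quantization levels")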

    99% of what people hear is what they think they SHOULD hear. Put a cheap amplifier in a $2000 tube amplifier case, and people will think it sounds better than a high-spec Carver amp in an old Onkyo case. (If they can see the cases, that is.)
     
  18. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,266
    And the dropouts would be "significant" enough that pretty much any listener who is paying attention would notice, correct? That is, it would be more than just a few ms--it would be glaringly obvious.

    I've also seen studies of varying legitimacy in which listeners preferred 128 or 192 kbps MP3 to WAV files, though that speaks less to qualitative difference and more to plain personal preference.

    16 bit to 8 bit? That's kind of extreme. I sometimes make devices with things like the old ISD chips (1600, 2500, used in answering machines and such) or the HT8950 (I think--they're used in toys for voice modulation and suchlike) precisely because they sound like crap (and that is what I'm aiming for on certain occasions). Granted, those also have something like 8 kHz sampling rates (or even lower), but still... I suspect the difference would be more apparent with longer samples containing something more than just a human voice.

    Since you brought it up (well, not really, but kind of), should there be any discernible difference when replacing a germanium transistor (in an oscillator--within a transistor combo organ) with a comparable silicon transistor? I would think not, but, again, there are people who claim otherwise.
     
  19. billvon Valued Senior Member

    Messages:
    21,634
    Yes; it would be like a YouTube video stopping and starting due to insufficient bandwidth.
    Yes, you would think. But people's ears aren't like microphones, and they can't detect many kinds of distortion the way a microphone and scope/spectrum analyzer can.
    No - provided everything else (linearity, gain, phase delay, etc.) stays the same.
     
  20. parmalee peripatetic artisan Valued Senior Member

    Messages:
    3,266
    Listen to a vinyl record with very long sides--say 25 minutes, or more, on a 12" 33rpm--and then listen to a non-remastered (but done within the past 15-20 years) CD re-issue of the same recording. The difference is pretty stark, and one that I think most listeners would be capable of discerning.

    Smaller companies who do vinyl these days--and who care about "public perception"--often adamantly refuse to press anything with more than 17 or 18 minutes per side. You can get away with longer sides with music which is less "bottom heavy," but otherwise...

    With regard to recorded music, even analog purists have changed their tune since roughly the early 'aughts. The preference for analog these days falls far more heavily on the recording end of it, and the reasons for the preference have changed: analog is more "forgiving," and one can achieve a certain "sound" without having to process it to make it sound, erm, "not sterile," I guess--but that sound is basically a compromised sound (as noted in post #10).

    I use a mix of both analog and digital, both at home and in studios--mostly recording all basic tracks on magnetic tape and then dumping them onto a computer. At home at least, I'll probably always use tape for basic tracks--without some really nice, and pricey, compressors (in the analog compression-expansion sense, and actual hardware ones; I'm sure there are adequate DAWs nowadays, but I haven't found them or don't want to pay for them), it's too much work otherwise.
     
