Common myths and misconceptions

Should you turn your device off at night? Do you really know the difference between a computer and another smart device? Can changing your DNS resolver speed up your internet? Is a firewall the be-all of online security, or no longer needed?

In nature, nothing is black and white, and few things are simple. The same goes for computing, networking and many other technologies. Yet even as modern tech becomes ever more widespread and familiar, people still often fail to really grasp what's going on in the devices they use and depend on every day. People have a habit of knowing a little about something, even if it's wrong, and over-estimating their knowledge, understanding and competency. They tend to fill in the "gaps" with imagination or with things they do know, despite logical inconsistencies. People also seem to have a love affair with myths, half-truths and the like, and choose what to believe - usually whatever pleases them or serves their agenda. It may sound crazy, but these are examples of what psychologists call the "Dunning-Kruger effect" and "wishful thinking". As Stephen Hawking once said: "The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge". Then there's pseudo-science and fake/alternative "facts", which are deliberate lies or twisted truths for the purpose of manipulation, but all this goes outside the scope of this section.

So below you can get a glimpse of some relatively common myths and misconceptions (misunderstandings) and why they're not (quite) true. Simply click or tap on the title bar or V icons to reveal points for each sub-topic.

(Remember you can dispute any claims using the contact form here)

General computing
  • Misconception: "A computer is basically just an electronic typewriter tethered to a TV".
    • A computer is a device, machine or system, typically electronic, that processes information automatically, according to a set of commands or program. Computers are used in a range of devices, from modern TVs to ATMs, from games consoles to servers and Wi-Fi printers. Routers and many smart devices also have them, as well as DVD players and many HiFi systems which are often overlooked, even forgotten about by experts. Basically, computers are everywhere now, with or without a keyboard and screen.
  • Partly-true/out of date myth: "Macs and Linux PC's are no good for gaming".
    • This may have been mostly the case as late as ~2010, but even then they were still suitable for games built for those platforms. Nowadays, many programs can be emulated on them (especially Linux operating systems) or have editions that work flawlessly on them. The main issues for Linux, however, are getting compatible drivers up-to-date and working, and ensuring the game isn't completely dependent on some obscure software tech that simply won't run right on the unintended OS. Windows has a similar problem with very old games that never got updated and re-released.
  • Misconception: "Bits and bytes are interchangeable".
    • Except by mistake, they're not interchangeable: a bit is one binary digit, whereas a byte (or octet) is eight bits. A "byte" used to refer to various numbers of bits, but should always refer to 8 now. Usually a lowercase "b" means bit, while a capital "B" means byte.
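The distinction matters most when reading connection speeds: links are usually advertised in bits per second, while file sizes are given in bytes. A quick sketch with made-up numbers (the rates below are purely illustrative):

```python
# An illustrative "80 Mb/s" (megabits per second) connection, and how long a
# 400 MB (megabyte) download would take at that rate, ignoring all overheads.
BITS_PER_BYTE = 8

rate_megabits = 80
rate_megabytes = rate_megabits / BITS_PER_BYTE   # 10 MB/s
download_mb = 400
seconds = download_mb / rate_megabytes           # 40 seconds

print(rate_megabytes, seconds)
```

So an "80 meg" connection moves at best about 10 megabytes per second - mixing up the units makes things look eight times faster than they are.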
  • Myth: "The youngest are best at computers and internet stuff" or "One is too old for all this tech".
    • Firstly, as of 2022 many of the oldest half of the population have had little or no experience with computers for many years if not decades and were not taught a great deal of relevant skills when they were in school, in contrast to the younger half.
    • Secondly, younger brains tend to take up learning things more rapidly, so the older one gets, the harder it is or longer it takes to learn and get used to something new - but even being particularly old, one can still learn unless one is in advanced stages of dementia.
    • Last but not least, there's the belief phenomenon - simply thinking one can't adapt can impede one's ability to adjust, stop one from even trying, perpetuate this myth and lead peers into the same mindset. ICT mostly relies on skills, not talent, and one still gains knowledge and experience as one goes on; it may just take more time to adjust to unfamiliar things. You may therefore find those around 30 to 40 are the most employed and skilled in ICT sectors, increasingly so for 50+ year olds up to retirement age. 20 to 30 year olds may be more attracted to high-tech jobs, but they may still need a lot more training and experience yet.
Internet
  • Misconception: "The internet and web is the same thing and interchangeable, or are similar".
    • The internet is a global computer network (or network of networks), or a medium for data transfer, that web services use as well as other applications. It's a physical infrastructure, as well as a virtual medium that uses a common packet-based (or datagram) protocol/method.
    • The world wide web is more of a concept: interlinked and interactive multi-media documents, or a library of information mostly for humans. It usually relies on the internet or Internet Protocol to exchange information in order to be read or used by remote visitors. Website servers and clients normally use the Hypertext Transfer Protocol (or a variant of it) on top of TCP (seldom UDP) and IP.
    • "Internet", "web" and "interweb" are interchanged informally or when confused. The latter is often used in mockery of those naïve to the distinction, or new to "online" business, but may also refer to the confused conflation of the two.
    • The two were probably first confused around the 90's, when the majority of novices began using the internet almost exclusively via websites, without directly seeing the underlying activity. Furthermore, web browsers (client software) like "Internet Explorer" came about by the late 90's, giving the wrong impression that the two are very much the same.
    • The TCP/IP suite and IPv4, as well as other packet-switching networking technologies, in fact predate the web by years or decades.
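To make the layering concrete, here's a minimal sketch: the internet's transport (TCP over IP) happily carries any application's bytes, and HTTP - the web - is just one application protocol among many. This runs entirely over the loopback interface, so no real network is involved:

```python
import socket
import threading

# A tiny TCP exchange over loopback: TCP/IP delivers whatever bytes an
# application hands it - it neither knows nor cares whether they are "web" data.
server = socket.create_server(("127.0.0.1", 0))  # port 0 picks any free port
host, port = server.getsockname()

def serve():
    conn, _ = server.accept()
    conn.sendall(b"got: " + conn.recv(1024))     # echo whatever arrives
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.create_connection((host, port))
client.sendall(b"not HTTP, just bytes")          # any application data at all
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply)
```

A web browser does essentially the same `create_connection` step - it just follows it with HTTP-formatted text instead of arbitrary bytes.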
  • Misconception: "The internet works like the plain old telephone network and ham radio".
    • Neither is true, nor even a close match. Telephony uses circuit-switching - chains of engaged circuits or virtual circuits - mainly to convey audio. Ham radio works like regular radio, even if it supports digital modulation and data transfer, but is ungoverned and broadcast locally, divided by channels within amateur radio bands. The internet is essentially a global network of packet-switched networks for computers, facilitated by a common protocol and addressing scheme. The internet therefore works more like the postal system.
    • Dial-up and most broadband technology uses a landline telephone link as a carrier, or electrical and network link. Both need modems at each end. However, most of the internet infrastructure does not use the telephone network. Instead telephony services are increasingly being digitised and carried by the internet.
    • Internet access is technically granted by an ISP, unless one operates one's own network, known as an autonomous system, with at least some connectivity to another IP carrier or direct connections to other AS networks - though typically it's both.
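The postal analogy can be seen in miniature with UDP: each datagram is a self-addressed "envelope", delivered independently, with no held-open circuit as in old telephony. This sketch uses only the loopback interface:

```python
import socket

# Two UDP sockets on the loopback interface. Every datagram names its
# destination when sent, and arrives stamped with its source address -
# like letters in the post, not a continuous phone circuit.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0 picks any free port
dest = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"first letter", dest)   # each send names the "to" address
sender.sendto(b"second letter", dest)  # a separate, independent "envelope"

data, source = receiver.recvfrom(1024) # recvfrom also returns the "from" address
print(data, source)
sender.close()
receiver.close()
```

Note there was no "call set-up" between the two endpoints - the packets simply carry enough addressing to be routed on their own, which is the essence of packet switching.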
  • Myth: "Al Gore invented the internet".
    • A particular myth in the USA. Best known as a vice president, Al Gore was also an advocate for IT and helped drive the take-up of the internet in its infancy. He mistakenly implied he "took the initiative in creating the Internet"[quote] during a 1999 television interview, which turned into an urban legend satisfying the desire to answer the question "who invented the internet?" simply. For the most part though, he only helped drive its development and deployment.
    • Numerous people were involved in the development of the internet. Vint Cerf and Bob Kahn were chiefly responsible for the TCP/IP protocol suite. The first packet-switching network, the ARPANET, was mainly started off by Bob Taylor and Larry Roberts from the ideas of J. C. R. Licklider. Several others contributed to its design and development.
  • Misconception: "IP = IP address and is always traceable, personal or tied to a computer, or inherently correlates to a real location".
    • IP is short for Internet Protocol. An IP address is part of this and is used for delivering data to a node, terminal or computer application. Each packet has a "to" and "from" address, which the internet uses to transmit the message to the intended network and eventually the designated receiver and program. Most IPA's are not permanently assigned, nor do any bear any real location information by themselves. For the most part, only ISP's internally can link an IPA to a real location such as a house. However, within a "shared" connection or address, the global address does not usually give away the exact computer, and never the person using it. Only behaviour, data contents, etc can really deduce individuals, which is neither 100% reliable nor always feasible to obtain.
    • IP version 6 allows for truly unique addresses for each interface, or even application, which are also transmitted over the net. However, the latter part can be randomly set and changed, and typically still bears no inherent location or identification info to anyone besides the end user and the access router or switch, which matches the address to a network interface. Bear in mind this basically causes signals to be transmitted along the correct circuits - it needs no awareness of real-world locations.
    • Tracing or tracking IPA's requires sufficient information from online services, ISP's, etc. Tracking locations of devices is best done with GPS, which requires the device to have a GPS receiver and tracking software running on it. IPA's from wireless clients are only likely to change on the move when they travel far enough to switch network providers. Mobile/cell phones can be triangulated by measuring the cellular signal strength from at least three cell sites.
    • IP addresses (usually in blocks or ranges), are assigned (leased) by ICANN/IANA, to regional internet registries, which in turn are assigned to Autonomous System operators (typically ISP's, datacentres, mobile network operators and large companies) and they then manage the address assignment to end users. While IPA's for servers in datacentres or offices usually are static, many domestic and mobile/cellular addresses are dynamic, meaning they may change after reconnections or even certain periods of time. Static IPA's aren't permanently tied to certain machines either - they still can be reassigned.
    • Some IPA's known as "anycast", "broadcast" or "multicast" addresses are used or received by multiple devices at a time. The former is sent to the nearest or best available server while the latter two are addresses that specially refer to numerous clients, thus many or all within a network will receive a copy.
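The points above can be poked at directly with Python's standard `ipaddress` module: an address itself encodes routing-relevant facts (private, multicast, version), not geography. The addresses below are example/documentation ranges, not real hosts:

```python
import ipaddress

# Inspecting what an address alone can tell you - note nothing here
# reveals a street, city or person; only how the address may be routed.
shared = ipaddress.ip_address("192.168.1.10")
print(shared.is_private)        # True: not globally routable; typical behind a home router

group = ipaddress.ip_address("224.0.0.1")
print(group.is_multicast)       # True: refers to a group of receivers, not one device

v6 = ipaddress.ip_address("2001:db8::1")
print(v6.version)               # 6: an IPv6 address - still no location baked in
```

Any "geolocation" of addresses comes from external databases mapping assigned blocks to registrants, not from the address itself.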
  • Misconception: "Internet speed means literally how fast data can move, therefore fibre optics transfer faster because of the speed of light".
    • By this logic, most of us would already be close to the limits, and those with pure fibre-optic connections would be at the limit. But no - the "speed" means how much data the connection can transfer in a given period, usually a second. The message-conveying signal in electrical cables travels at a broadly similar speed to light or infrared in FO - both around two-thirds of the speed of light in a vacuum. The main reason why FO is "faster" is that it can carry very high bit/data rates over much greater distances than copper or other electrical conductors can. It's also highly (if not completely) resistant to electrical noise or EM interference, making it very reliable despite the higher data rates leaving "little" error margin.
    • Those with pure FO connections can still experience poor downloads/uploads, due to certain links or servers becoming congested or faulty, or other unrelated issues. FO links can also be downgraded, exchanging only very low rates. Like electrical connections, this often happens automatically when errors are detected, as it means the cable has been damaged or otherwise can't sustain a densely encoded signal.
    • Wireless telecoms are far more prone to interference, often contend with other transmitters, and carry some additional processing overhead, which is why the speed varies greatly and often adds 1-10+ milliseconds to transmission delays. They often resort to very low bit rates too, to compensate. The delay of radio/microwaves propagating through air is insignificant compared to this, travelling about 186 miles (300 km) per millisecond - even faster than light/IR along FO.
    • The main reason server closeness improves down/uploads is that there's less chance of packet loss or reordering (which often requires retransmissions or waiting before the transfer can resume) and, for TCP in particular, a sender that may only have a limited number of packets unacknowledged at a time spends less time waiting for acknowledgements; the latter is thanks to lower network latency or round-trip times.
    • Remember all data is sent in chunks, piece by piece, with each packet typically limited to 1.5 kilobytes or less. The largest packets only take proportionately more time to transmit because they're longer pieces, and each packet/frame must be fully received before it's processed and sent on to the next entity.
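Some back-of-envelope sums make the two points above concrete: "speed" is data per unit time, and for a window-based protocol like TCP, latency caps throughput no matter how fast the link is. All the numbers below are hypothetical examples:

```python
# How long one ~1.5 kB packet takes to clock onto a 100 Mbit/s link
# (its "serialisation" time), and what latency does to TCP's ceiling.
link_rate_bps = 100e6                 # a 100 Mbit/s link (illustrative)
packet_bits = 1500 * 8                # one typical ~1.5 kB packet
serialisation_s = packet_bits / link_rate_bps
print(serialisation_s * 1e6)          # ~120 microseconds per packet

# A TCP sender may only keep one window of unacknowledged data in flight:
window_bytes = 64 * 1024              # a common 64 kB window (illustrative)
rtt_s = 0.040                         # 40 ms round trip to a distant server
ceiling_bytes_per_s = window_bytes / rtt_s
print(ceiling_bytes_per_s / 1e6)      # ~1.64 MB/s ceiling, however fast the link
```

Halve the round-trip time (e.g. by using a nearer server) and that window-limited ceiling doubles - which is exactly why closeness matters even on fast links.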
  • Partly-true: "Changing your DNS (resolver) can speed up your internet".
    • This only applies to loading websites and first connecting to most online services. The DNS's basic and main use is returning IP addresses for domain names (such as doesnotcompute.info), whether typed into a URL bar, loaded from a bookmark or written in an application's file. Getting a query resolved quicker can help things load quicker, but it's unlikely to be very noticeable, and it won't increase download speeds or make most activities quicker. This is because once an application has resolved a query, not only may the Operating System cache it but, most of all, the application may maintain the connection anyway, needing nothing to be resolved again.
    • All IP packets use IP addresses, not URL's or domain names. Virtually all applications maintain connections as long as they're needed, and thus retain the IP address of their peer/server. Should they need to resolve again soon, they're likely to be given a cached answer by the OS or another resolver.
    • One exception is that some complex and content-rich services may embed many different sources that each have to be resolved, therefore having a fast resolver can help it load and display all quicker.
    • Another exception is e-mail servers that send out e-mails to many different domains at a time; they too will benefit from a quicker resolver - not only for MX records and IP lookups, but for things like SPF and DKIM records too.
    • A fast DNS resolver might be, well, fast to ping, but it doesn't guarantee all resolutions will be as speedy nor does it mean it won't have problems with certain Top Level Domain or authoritative name servers. In fact, the latter is the one most likely (compared to roots and TLD's) to be a single server housed thousands of miles away with poor connections - and this is the key piece that answers queries with detailed records!
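Resolution is a single, one-off step an application performs before connecting, which is easy to see from code. This sketch uses the name "localhost", which resolves locally (via the hosts file) so it works offline; a real domain would go to your configured DNS resolver instead:

```python
import socket

# getaddrinfo asks the system resolver to turn a name into connectable
# addresses. After this one call, an application connects (and keeps
# connecting) by address - the name needn't be resolved again.
results = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in results})
print(addresses)    # typically ['127.0.0.1'], ['::1'] or both
```

However fast that call returns, it happens once per connection at most - which is why a quicker resolver barely dents the time spent actually transferring data.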
  • Myth/Misconception: "Once you put/enter anything online, it's "out there" or up for anyone to find or take" or "Anything online is broadcasted to all, for any hacker to pick and probe".
    • The internet doesn't work like ham radio, or any radio. It works more like the post (snail mail): at its best, it individually and securely transmits messages to the designated recipient (such as an authentic server), which should securely store the data it's authorised to and not reveal it to anyone it shouldn't - unless someone finds an exploit that evokes the revealing process, if there is one. However, one can't always be sure data won't be eavesdropped on (this is too often possible) or misdirected to the wrong place and then captured (misconfiguration does happen); therefore encryption and other mechanisms must be employed to prevent the data from being read and/or understood. For the most part though, the internet transmits data end-to-end, through cables, network routers and switches, and on to servers. It's not "broadcast to everyone", although one should always be careful about what, where and how they're submitting, and use various reputable security solutions.
    • Unencrypted wireless connections are an exception; however, to pick this up an eavesdropper needs a receiver and a data-capture program, and needs to be pretty close (such as within 30 metres) - or further away with a large and powerful enough antenna and amplifier. If the applications, such as a web browser and website, use a good encryption scheme though, the eavesdropper won't be able to make out much.
    • While it may be impractical to remove something from online, some content can be irrecoverably lost. This can happen if the only sources of the data are deleted or destroyed and nobody else has an independent copy. Public websites, however, are often "scraped" by web crawlers (bots or automatic programs) and then republished on sites such as search engines and webpage archives.
    • The internet does not and cannot, by itself, connect everything. Software interfaces (e.g. server applications) with enough privileges are needed additionally.
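That encryption is something applications layer on top can be seen in Python's standard `ssl` module: a client wraps its TCP connection in TLS, and the default settings show the two safeguards involved - encrypting the traffic and checking the server really is who it claims to be:

```python
import ssl

# The default client-side TLS context. An application would use
# ctx.wrap_socket(...) around its TCP connection; the network itself
# provides none of this protection.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True: the server's name is verified
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: a valid certificate is required
```

Both checks matter: encryption alone stops eavesdropping, while certificate verification stops data being handed to an impostor that messages were misdirected to.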
  • Misconception: ""Cyberspace" means the "internet" or "web"".
    • The word was coined by author William Gibson in the early 80's and popularised by his 1984 sci-fi novel "Neuromancer", where it described a kind of virtual world. It was later used, informally, by some to name the internet or world wide web. Gibson himself later stated the word was essentially just a buzzword.
    • The word "cyber" was in fact short for "cybernetics" and is used as a prefix to refer to interactions or integration of biology and technology - electronic or mechanical. The "space" is somewhat a misnomer too, as the internet is a telecommunication medium rather than a space for storage and whatnot. It also occupies the real physical space we're familiar with; it is neither an ethereal one nor provides another spatial dimension. Any "virtual world" it provides is in fact provided by the applications on top that use it.
  • Misconception: "The internet lives in datacentres or server farms and DC's run internet servers".
    • The internet provides connectivity between peers or nodes. It doesn't store and process data for services by itself. There isn't really such a thing as an "internet server" either. Servers are used to provide application-based services that now typically use the internet. Servers don't even need to run in datacentres, and some still don't.
    • The internet "back-bones" are normally linked together in datacentre-like buildings, but mainly use routers and other networking equipment that allow different network owners to exchange IP traffic. These are often known as "internet exchanges", while locations where a network owner's equipment is sited is known as a "Point of Presence".
    • By extension, the internet in fact lives everywhere which has a router, wireless access point or network switch with (global) internet connectivity. However, essentially it's the infrastructure between peers.
  • Misconception: "The internet is one big entity or like a hivemind of people and computers".
    • It's much more like a collection of many smaller interconnected networks, that use and provide one globally spanning service. There are some parts in the world where access is heavily restricted, or even one or two countries that only run their own isolated and highly limited network. The true internet however is the one and only common network despite it not being wholly owned by any individual or organisation, nor being one unilateral entity.
    • The internet doesn't use all computers, or even that many, to function. Computers can "talk" to each other, but not without the purpose of programs and the users or engineers behind them. It is neither sentient nor like a hivemind - the latter describes a behaviour involving many people, particularly on social media. Sometimes "hivemind" is rhetoric (a provocative but insincere or meaningless statement) used by those condemned for an action or belief, in order to dehumanise or invalidate the reacting masses. Mobbing and the inclination to form or join like-minded communities is a common human trait, not that the latter is usually a bad thing.
    • While major outages can and do happen, taking down millions of websites and other services or even cutting off internet access to millions of subscribers, there is virtually no centralisation within the system, nor a single point of failure for everything. Even the DNS's mere 13 root name server addresses are served by redundant clusters distributed across many countries, and they only do part of the domain resolution - pointing the TLD (such as ".com" or ".info") to a TLD's name server. Even if all root clusters were taken out for a while, TLD servers and authoritative name servers may still function, as would caches and many end users' retained connections.
  • Misconception: "Domain names are just human friendly addresses that DNS turns into IP addresses that distinguish websites".
    • While domains or web addresses are mainly for humans, domain names in URL's are in fact also used to identify websites for both the client and webserver. So although an "A" or "AAAA" record lookup for doesnotcompute.info (for example) indeed returns an IP address, the domain name "doesnotcompute.info" is still passed between web browser and webserver to specify the website. This is the main way the browser can differentiate websites that share IP addresses.
    • The DNS also provides other information too, such as e-mail server domains and security/validation related information for them.
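This name-passing is easy to see in the raw request itself. Two websites can share one IP address; the browser names the one it wants in the Host header (and, with TLS, in the SNI extension). A minimal sketch of the bytes a browser would write down its TCP connection:

```python
# Constructing (not sending) a minimal HTTP/1.1 request. The domain appears
# inside the request, which is how a webserver hosting several sites on one
# IP address knows which website is being asked for.
def http_request(domain: str, path: str = "/") -> bytes:
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {domain}\r\n"        # the domain name travels in the request itself
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

req = http_request("doesnotcompute.info")
print(req.decode("ascii"))
```

Swap the domain for another hosted on the same server and only this Host line changes - the IP address, and therefore the DNS answer, may be identical.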
  • Myth: "No one really knows how the internet works, it's a great mystery or secret" or "It takes a genius or a lot to study the net".
    • Although most people don't understand it well enough, a significant number of people do. The basic principle isn't hard to understand either.
    • You can learn a lot if you know where to look, which can be as simple as using a search engine. It's in fact no secret and there are websites that try to break it down in plain English - DNC intends to be one! Many who don't do this either don't want to know, perhaps because they think it'll be a waste of time, it's too "nerdy" or because they believe they won't be able to understand or find out. A few are also unable or reluctant to use search engines properly.
  • Misconception: "Cloud hosting/storage is serverless" and "The cloud is (just) the internet".
    • It's no more possible to run online services without servers than it is to run software without hardware, or computers. Cloud hosting and the like may seem serverless because the servers are managed by the host/operator, invisible or indistinct to subscribers and third-party users.
    • Cloud hosting, storage and computing are merely distributed computing and storage. Any cloud service uses the internet both to deliver its service and to obtain data itself. "The cloud" isn't really one thing, as there are numerous distinct cloud services, whereas "the internet" is, as it refers to the single common computer network.