(Image from Stefan Alfbo's Post)
What do you think of when "Referential Transparency" comes to mind? Perhaps you are thinking of purely functional programming languages. But people who think of it only this way do not understand referential transparency.
I have been browsing Stack Overflow and found no answer with a proper understanding of it, so it is no surprise that I know of no language which allows expressing real-time and systems-programming behaviour in a referentially transparent way.
Usually, referential transparency (in programming languages) is seen as a category of expressions or functions (which it is not) whose result value can be replaced by a constant value without changing the semantics. Coders believe it is necessary to constrain themselves to obtain referential transparency, so they think of functions which have no side effects. While this is indeed a special case of referential transparency, it is not the definition, and it is an unnecessary limitation.
Here is how I understand the original definition of referential transparency:
Referential transparency is a property of a type system. A referentially transparent type system supports types whose instance values can replace any language expression of that type without changing the execution semantics specified by the given language.
This means we could have a referentially transparent OOP language if the type system were expressive enough and verifiable. Referential transparency is not inherently related to side effects or explicit syntax.
According to the Wikipedia article about it, the term "referential transparency" seems to have been coined by the logician W. V. O. Quine, naming a concept described in Principia Mathematica. I am slightly reluctant to mention this Wikipedia article because even its explanatory examples are wrong.
- Side note: The Wikipedia article claims that "_ lives in _" (e.g. "She lives in London.") is referentially transparent, while "_ contains _" (e.g. "'London' contains 6 letters.") is referentially opaque. Neither claim holds in general; there is no genuine referential transparency in natural-language predicates. It is the verbatim context (characters appearing inside a quotation) which makes some part of the language referentially opaque. Of course I can correctly say "The name of England's capital contains six letters," and I can say "She lives in 'London'."
Is code inside string quotes referentially opaque? Not in general. Funny thing is, did you know that Quine is also the one who came up with the idea of string interpolation in logic? (He called it quasi-quotation, using ⌜…⌝ as quotes.) In the context of string interpolation, we can make string contents referentially transparent.
Check out these cases of referential transparency: "She lives in ⌜{_}⌝" or "⌜{_}⌝ contains 6 letters."
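The quasi-quotation idea maps directly onto string interpolation in modern languages. A small Python sketch (f-strings playing the role of ⌜…⌝; the variable names are mine): inside an interpolation slot, any expression denoting the same value is substitutable, while characters inside a plain string literal remain referentially opaque.

```python
city = "London"
capital_of_england = "London"

# In a plain string literal, 'city' is just five characters,
# not a reference — the quoted context is referentially opaque.
opaque = "She lives in city"

# Inside an interpolation slot, expressions denoting the same value
# can be substituted for one another without changing the result:
a = f"She lives in {city}"
b = f"She lives in {capital_of_england}"
c = f"She lives in {'Lon' + 'don'}"
assert a == b == c == "She lives in London"

# The same works for the "contains 6 letters" predicate:
assert f"{city} contains {len(city)} letters" == "London contains 6 letters"
```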
But what is the problem with seeing referential transparency as side-effect-free functions? Side effects matter in practical software: the ability to hide extraneous mechanical details behind side effects can reduce reading and writing overhead. The rejection of side effects is subjective, and it even limits expressivity. One consequence of the view "referential transparency → no side effects" is that the strictly referentially transparent languages I know of have no real-time semantics (Haskell has one only for debugging purposes).
The truth is, purely functional programming languages have side channels and side effects too, but they are deliberately omitted from the language's semantics specification. If not optimized away, function calls have time and memory costs. "Leaky abstraction" is another catchphrase for this issue, which occurs when implementation details are excluded from the specified semantics of abstractions, eventually leading to effects that cannot be explained within the specified semantics of the language alone. Here, attributing the information leakage to abstraction itself is the same kind of misunderstanding. (This means I do not agree with the law of leaky abstractions.)
Referential transparency is not a property reserved for functional programming. The problem is just that other popular languages are not even trying to extend their type systems to explain side effects in terms of values. (For me, a type system is equivalent to a formalizable system with a non-empty set of verifiable semantics.) Of course, for a competent human being the semantics of programming code is sufficiently transparent (even if you sometimes need compiler- and machine-specific knowledge, unfortunately). If we ask a competent developer to replace a function call, they are able to do it (where possible).
Global variables, (real) time, space requirements and other system functionality (drivers, storage access and system calls) are typically considered side effects (technically, memory access is one too, since it modifies caches). The reason for this classification, however, is only that this kind of stuff is not modelled in the type system. (Economists call these external effects: stuff they like to exclude from their model even when it has a relevant effect.)
Without changing the definition of side effects in the programming-language context, they can be made referentially transparent if their details are modelled in the type system. We would then be able to construct values that satisfy these types.
Nothing prevents us from defining types with time, space or other side-effect semantics, except for too-narrow conceptions. A value with a duration of 3 seconds could be interpreted as a busy-waiting loop of sufficiently many instructions when it takes effect. Alignment, location and memory constraints could also be part of a type. Such things just need to be first-class citizens of the language.
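Python has no such type system, so the following can only emulate the idea at runtime; the `Duration` type and its `take_effect` method are hypothetical names of mine, sketched only to show what "a value with a duration of 3 seconds, interpreted as a busy-waiting loop" could mean.

```python
import time

class Duration:
    """A value denoting a span of wall-clock time.

    Hypothetical sketch: in a language with time in its type system,
    this property would be checked statically, not emulated at runtime.
    """
    def __init__(self, seconds):
        self.seconds = seconds

    def take_effect(self):
        # One possible interpretation of the value: a busy-waiting loop
        # that spins until the denoted span has elapsed.
        deadline = time.monotonic() + self.seconds
        while time.monotonic() < deadline:
            pass

three_seconds = Duration(3.0)
# Any expression denoting the same duration is substitutable:
assert Duration(1.0 + 2.0).seconds == three_seconds.seconds
```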
Another impediment could be that it makes type checking more complicated. Yes, but I don't think this is a good reason; it amounts to limiting users out of laziness or lack of inspiration.
This is the end of my monologue for now. Do you think I confused something? Let me know in the comments.