Saturday, January 6, 2018
I was not there, but I suspect new programming languages are preceded by new ways to think about programs. The distinction between code and data inspired opposing viewpoints, supported by Lisp in opposition to C++: Lisp treated code as data, and C++ bundled code with data. A more literal use of the word "function" led to functional languages. To deal with concurrency, I find myself examining the mechanics of functions.

A function is implemented as a frame on a stack. Threads and processes are implemented as contexts in a circular buffer. I believe the first step toward concurrent functions is to blur the distinction between frame and context. Functions can en-queue or re-queue themselves or each other to a circular stack. C++ made the "this" pointer common to its member functions. Similarly, I'd make a sequential state a standard argument to each function on the circular stack. The return value of a function on the circular stack indicates how or whether to re-queue the function. Thus, using the standard argument, a function could pick up from where it last returned.

To promote mix-and-match functions on the circular stack, each function on the circular stack is actually a cluster of functions, re-queued as a cluster, and advanced from one to the next by another return value. Thus, the standard argument indicates intra-function location, and progress through the cluster indicates inter-function location. User-specified arguments to the cluster of functions are en-queued or re-queued to per-type circular buffers. Tagged arguments allow arguments to be shared between functions on the circular stack. Function clusters have a standard argument called a layer that can be changed during the function invocation, is retained for re-invocation, and is copied for new clusters. Thus, function clusters are collected into layers with the same initial layer standard argument. Often, the layer standard argument is used as a tag to share arguments across the layer.
This permits a function to en-queue a cluster with one component specified as a function that uses the layer's arguments in addition to the arguments shared by the cluster components.
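The core of the scheme can be sketched in Haskell. This is a minimal single-state model of a circular stack of functions, not the full cluster-and-layer design; the names `Task`, `Requeue`, and `run` are mine, not part of any existing library.

```haskell
-- A task takes its sequential state and returns the new state
-- plus a return value saying whether to re-queue itself.
data Requeue = Again | Done deriving (Eq, Show)
type Task s = s -> (s, Requeue)

-- Run a circular stack (here a plain list used as a queue) of
-- (task, state) pairs, collecting each intermediate state.
-- A task that returns Again is re-queued with its new state,
-- so on its next turn it picks up from where it last returned.
run :: [(Task s, s)] -> [s]
run [] = []
run ((f, s) : rest) =
 let (s', r) = f s
 in s' : run (if r == Again then rest ++ [(f, s')] else rest)
```

For example, a countdown task re-queues itself until its state reaches zero, interleaving with any other tasks on the stack.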
Sunday, December 31, 2017
Some time ago, I read a book by Penrose called "The Emperor's New Mind" that argued against the possibility of artificial intelligence. As indicated in a previous post, I believe artificial intelligence is a misnomer; it should be referred to as hyper-discipline. Penrose's approach was to construct plausible alternatives to the assumptions of artificial intelligence researchers. One alternative was that evolution decreases the entropy of life by dissipating that entropy to space. Just as the discovery of extraterrestrial intelligence would have more consequence for social paradigms than for scientific advancement, a precise definition of life in terms of entropy would have more consequence for social paradigms than for science. In this post I will attempt to collect some elements of a model of the human condition. With admitted imprecision, I define life as anything that dissipates entropy without creating too much entropy, in an environment that dissipates enough entropy, where "too much" and "enough" mean that the entropy of the environment in general decreases, and where "in general" is left undefined. Furthermore, I define science as a plane with life and the minimum entropy at the origin. In this scheme, technology is lines in science not through life, and the further a technology gets from life, the more entropy it has.

To me, the only purpose of wealth is to influence people's opinions. For example, consider Caltrain, the diesel commuter train between San Jose and San Francisco. The beneficiaries of electrifying and elevating this train would include the lower-income communities between the expensive communities at its ends, because then they could live inexpensively and work lucratively. However, a Luddite or NIMBY mentality reduces progress. Who benefits from a mentality that reduces progress? My conclusion is that developers are using their wealth to promote a mentality that slows progress. A similar argument would apply to climate change.
Thus, in my model of the human condition, money shapes opinion, opinion determines which technologies are used, the technologies used determine the creation of entropy, and the creation of entropy determines sustainability. Note that environments, civilizations, technologies, species, and behaviors that produce more entropy last less long. The moral is that being obliged to discontinue a behavior is less pleasant than choosing to discontinue it.
Friday, October 13, 2017
Recently I have gotten interested in stocks and flows. When combined with feedback and delays, they are revelatory. They are simple, yet their importance has only recently been realized; in particular, J. W. Forrester founded system dynamics in the 60s. Because system dynamics is turning out to be the best hope for salvation from climate change, I feel a little guilty about applying it to my own silly projects. However, the temptation is too great, so I plan to use a dynamic system to make music. The beauty of it is that everything usually done with several different components, such as oscillators, filters, and sequencers, can be done with just stocks and flows in a topology. In a sense, stocks and flows are the fundamental constituents of musical instruments. Because computers are so fast, the timewheel algorithm is so efficient, and the leverage of simple calculations is so great, I feel confident a corpuscular model of stocks and flows will suffice to provide sound of many different qualities.

Consider a set of stocks of amounts. Rather than calculate how the amounts change as they flow from stock to stock, instead schedule fixed-size corpuscles to be transferred. Large flows are modeled by frequent transfers, and small flows by infrequent transfers. Also scheduled are changes to the flow rates. To model feedback, simply make the changes to flow rates depend on stock amounts, and to model feedback delay, simply schedule the flow changes to take effect some time after their calculation. Because I am already in the middle of a project to model, display, and manipulate polytopes, I will use the polytope faces as flow gates between regions containing stock amounts at points in the regions. Thus polytopes form membranes that in general prevent flow, but have special faces that allow flow dependent on any or all stock amounts at some prior time. Stocks, as points, reside in areas bounded by overlapping polytopes, and are fed and drained by special faces into their area.
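The corpuscular idea can be sketched as follows. This is a minimal model assuming a flat sorted event list rather than a true timewheel, and without feedback or delay; all names (`Stocks`, `Event`, `flowEvents`, `simulate`) are my own invention for illustration.

```haskell
import qualified Data.Map as Map
import Data.List (sortOn)

-- Stocks hold integer counts of fixed-size corpuscles.
type Stocks = Map.Map String Int

-- A transfer event moves one corpuscle from src to dst at time t.
data Event = Event { at :: Double, src :: String, dst :: String }

-- A constant flow of rate r (corpuscles per time unit) from a to b,
-- modeled as one transfer every 1/r time units up to horizon h:
-- large flows become frequent transfers, small flows infrequent ones.
flowEvents :: Double -> String -> String -> Double -> [Event]
flowEvents r a b h = [Event t a b | t <- takeWhile (< h) [1 / r, 2 / r ..]]

-- Apply one corpuscle transfer to the stocks.
step :: Stocks -> Event -> Stocks
step st (Event _ a b) = Map.adjust (+ 1) b (Map.adjust (subtract 1) a st)

-- Run all scheduled events in time order.
simulate :: Stocks -> [Event] -> Stocks
simulate st es = foldl step st (sortOn at es)
```

Feedback would be added by interleaving rate-change events that inspect the stocks, and feedback delay by scheduling those changes to take effect after their calculation time.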
Tuesday, August 22, 2017
Aesthetics are the most important thing about a computer language. Lazy or eager, imperative or declarative, object or aspect, template or macro, strong or weak: none matters as much as being pleasant to the eye and mind. Assembly had equilength lines. Lisp was conceptually simple, but impossible to look at. C had logical-sounding keywords, like if, for, and goto. C++ had good salesmanship. Haskell has significant indentation. But to properly take advantage of this most crucial feature of Haskell, one must use single-space indentation, and only when absolutely necessary. Also, keep lines and identifiers short. Except after >>=, use one-character identifiers near the end of the alphabet for lambda arguments, and except for clarity, use one-character identifiers near the start of the alphabet for named function arguments. Name lemma functions the same as the main function, except with a single capital-letter suffix. Use let instead of where, unless the variable is used in guards or multiple branches. Except to break a rule, never use do notation. Freely pass IO arguments to functions other than >>= and >>. Use <$> and <*> sparingly. Get to the let as soon as possible in IO functions.
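As one possible illustration of these rules (the functions themselves are made up), here is single-space indentation, short identifiers from the start of the alphabet for named arguments and the end for lambda arguments, a lemma named after the main function with a capital-letter suffix, and let instead of where:

```haskell
-- Lemma for count: occurrences of a character in a string.
countF :: Char -> String -> Int
countF c a = length (filter (== c) a)

-- Pair each word's first letter with its occurrence count.
count :: String -> [(Char, Int)]
count a =
 let b = map head (words a)
 in map (\x -> (x, countF x a)) b
```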
Friday, April 7, 2017
Sculpt displays a polytope. In interactive mode, the left mouse button (de)selects pierce points, changing modes deselects pierce points, and the right mouse button switches modes with a menu. Mouse modes are rotation about the pierce point, translation of the pierce point, and rotation about the focal point. Roller button modes are rotation about the pierce-point line of sight, scaling from the pierce point, and driving forward to and back from the pierce point. Moving the operating system window also translates, so the model appears fixed behind the screen. Mouse and roller button modes form a matrix of submodes of transform mode. Nontransform modes are random refinement through the pierce point with roller-button-controlled cursor warp, additive sculpting above the pierce point, subtractive sculpting under the pierce point, and pinning two pierce points while moving a third pierce point by mouse and roller button.
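The matrix of submodes might be encoded like this; the constructor names are my own shorthand for the modes described above, not identifiers from sculpt itself.

```haskell
-- Mouse submodes of transform mode.
data MouseMode = RotatePierce | TranslatePierce | RotateFocal
 deriving (Eq, Show, Enum, Bounded)

-- Roller button submodes of transform mode.
data RollerMode = RotateSight | ScalePierce | Drive
 deriving (Eq, Show, Enum, Bounded)

-- Transform mode is the matrix of mouse and roller submodes.
type Transform = (MouseMode, RollerMode)

-- Every combination of mouse and roller submode.
allTransforms :: [Transform]
allTransforms = [(m, r) | m <- [minBound ..], r <- [minBound ..]]
```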
Directory .script, or the -d directory, holds saved numeric, space, embedding, and polytope representations, timestamped and classified by backlink automatically, and named manually. The same directory holds configurations such as light color and direction, picture plane position and directions, focal length, window position and size, and refine warp.
Option -i starts interactive mode, -e "script" loads a metric expression to periodically display heuristic animation, -d "dir" changes directory, -n "dir" initializes a new directory with the current state, -r randomizes the lighting, -o "file" saves a format by extension, -f "file" loads a format by extension, -l "shape" replaces the current shape by a builtin shape, and -t "ident" changes the current shape by timestamp or name. Options are processed in order, so interactive sessions, animations, directory changes, initializations, loads, and saves can occur in any order. If the .script or -d directory does not exist or is empty, it is created with a regular tetrahedron, random lighting, and the window centered on screen. It is an error for the -n directory to exist, for the -f file to not exist, or for the -o file to exist. Errors are recoverable because directories contain history, and error messages contain instructions on how to recover.
The look and feel of sculpt is turn-based. Even metric-driven animation only updates the displayed vertices periodically. The rotations and such occur continuously through matrix multiplication, but the model remains rigid. Pinning and moving a plane is represented by a wire frame; vertices and possibly faces update only after the action completes.
Supplemental features to sculpt include graffiti on faces, windows to other polytopes on faces, system calls and icons on faces, sockets to read-only polytopes on faces, jumping through faces to other polytopes, and user authentication and kudos for various modifications to polytopes requested through a socket.
Wednesday, April 5, 2017
The properties of polytopes that I chose to keep invariant are discontinuities, flatness, collinearity, and convexity. To indicate that two points on a polytope are collinear, or cohyperplanar, I collect the points into boundaries and intersections between boundaries, such that two points are cohyperplanar iff they are in the same boundary. The discontinuities in a polytope occur only where boundaries intersect. To understand convexity, note that intersections between halfspaces are convex. Thus, if a discontinuity is concave, it consists of more than one halfspace intersection of the same intersecting boundaries.

I call the halfspace intersections polyants, and specify them as maps from boundary to side. Thus, in an n-dimensional space, a vertex has 2^n polyants, and an edge has 2^(n-1) polyants. If more than one polyant of a vertex has points near the vertex in the polytope, then the vertex is not convex, and similarly for edges, and so on. Note that the polytope itself has only the empty polyant, and the single boundaries each have two polyants. With respect to a boundary in its domain, a polyant is significant iff points in the polyant near the boundary are near points both in and not in the polytope. In fact, a two-boundary polyant is significant iff one of the boundaries is significant in the section of the polytope by the other boundary. Thus, a polytope is a graph of polyants. Since a polyant is specified by boundaries and sides, and a graph is a map from polyant to set of polyants, equivalent polytopes are found by permuting the boundaries and mirroring sides across boundaries.
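A minimal encoding of these definitions, with my own hypothetical names: a polyant is a map from boundary to side, a vertex on n boundaries has 2^n polyants, and one of the equivalences is mirroring sides across a boundary.

```haskell
import qualified Data.Map as Map

-- Boundaries are named by Ints; a polyant maps each boundary
-- in its domain to a side.
type Boundary = Int
data Side = Lo | Hi deriving (Eq, Ord, Show)
type Polyant = Map.Map Boundary Side

-- All polyants over a set of boundaries: every assignment of
-- sides, so n boundaries yield 2^n polyants.
polyants :: [Boundary] -> [Polyant]
polyants [] = [Map.empty]
polyants (b : bs) = [Map.insert b s p | s <- [Lo, Hi], p <- polyants bs]

-- Mirror sides across one boundary, one of the equivalences
-- described above (permuting boundaries is the other).
mirror :: Boundary -> Polyant -> Polyant
mirror b =
 let opp Lo = Hi
     opp Hi = Lo
 in Map.adjust opp b
```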
Friday, March 31, 2017
I know nothing about representation theory, but in this context, a representation is a set of tuples. A relation is a set of two-element tuples, and a function is a relation in which the first element of each tuple occurs in no other tuple. You can think of a relation as a function whose range elements are sets: the result of the function is the set of second tuple elements of tuples that have the function input as the first element. You can think of multivariable functions as sets of tuples with more than two elements.

In prior posts, I represented space as a matrix of sides, where row (column) indicated boundary, and column (row) indicated region. In my Haskell code, the first representation I chose was a list of lists of region sets. The position in the outer list indicated boundary, and the position in the inner list indicated side. In subsequent representations, I indicated the boundary explicitly, instead of implicitly by list position. I also used representations where the innermost sets are sets of boundaries instead of regions. Whenever I came up with a new representation, I worried whether I could convert between one and another. Now that I understand representations are just sets of tuples, I no longer worry about converting; converting is as simple as changing the order of the elements in the tuples.

In future computer architectures, I predict the preferred representation will be sets of tuples. In a computer, a set of tuples could be implemented as a CAM, a content-addressable memory. The challenge would be to make the CAMs in the computer completely configurable. Right now, we are limited to RAMs, random-access memories, because they are relatively easy to implement. Note that even RAMs are not completely configurable: some sequences of access are more efficient than others, depending on the particular implementation.
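The point about converting can be made concrete. Assuming a space stored as (boundary, region, side) triples (the names here are mine), switching between a boundary-first and a region-first representation is nothing more than reordering the elements of each tuple:

```haskell
import qualified Data.Set as Set

-- A space as a set of (boundary, region, side) triples.
type Triple = (Int, Int, Bool)

-- Boundary-first view: which (region, side) pairs each boundary has.
boundaryFirst :: Set.Set Triple -> Set.Set (Int, (Int, Bool))
boundaryFirst = Set.map (\(b, r, s) -> (b, (r, s)))

-- Region-first view: which (boundary, side) pairs each region has.
-- Converting is just a different ordering of the same elements.
regionFirst :: Set.Set Triple -> Set.Set (Int, (Int, Bool))
regionFirst = Set.map (\(b, r, s) -> (r, (b, s)))
```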