tag:blogger.com,1999:blog-35435451670117136062018-04-20T02:38:14.677-07:00sidedness geometryindividkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.comBlogger44125tag:blogger.com,1999:blog-3543545167011713606.post-74681246746005475242018-04-18T13:22:00.000-07:002018-04-18T13:22:01.852-07:00Innovation I like the idea that intelligence is a historical artifact. In the dark ages, good handwriting was considered intelligent. In the dot-com era that we are thankfully leaving, technological imagination was considered intelligent. Because capitalism resists progress, some who made fortunes still have bullhorns to promote their latest ideas. Usually, these ideas are boring to philosophers who already considered the idea and went on to more interesting ideas. For example, consider the idea that the universe is a simulation in another universe. This is as meaningless as religion. Suppose such a simulation existed in our universe. Its value, if the word value is to have meaning beyond its anemic economic sense, is proportionate to its information content. Assuming, as we must, that the longevity of the simulation is proportionate to its value, its longevity is proportionate to its information content. But that is just a restatement of the Big Bang theory. Furthermore, the notion that the universe consists of information did not originate from technocrats. Physicists have been playing with the idea that the information content of matter falling into black holes is etched on the surface. Therefore technocrats, and more generally capitalists, never invented anything. All innovation comes from the environment.<br /> <div></div>individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-70590810991131183902018-02-22T11:45:00.001-08:002018-02-22T11:55:00.860-08:00Interprocess CommunicationPosix IPC is the only portable standard, but it suffers from a lack of generality.
Ideally, it should be easy to establish communication, with arguments to a single function, between zero or more processes or threads, identified by uid, gid, pid, tid, key, and/or path, with optional blocking under various conditions, resumable or not, with or without filesystem presence, with any atomicity. A fundamental concept that Posix glossed over is whether the communication is zero, one, or many to zero, one, or many. Regular files are any to any, but require sideband bookkeeping to prevent the same user from rereading already processed communications. Named pipes prevent rereading of communications, but cannot have multiple simultaneous readers. Each kind of Posix IPC has blocking peculiarities, as if the blocking behavior were specified without user friendliness. If I don't mind filesystem clutter, I can get a regular file to have atomic writes, and block on read from eof. Appends to the regular file go through a corresponding named pipe, so they are non-blocking and atomic up to PIPE_BUF (4K on many systems). Each processor of the regular file tries for a writelock of effectively infinite length at eof. To allow for race conditions, check the file size after acquiring the lock, and retry if the lock is not at eof. If the writelock at eof is acquired, block on read from the named pipe. Upon read from the named pipe, append to the regular file, and release the writelock. If the attempt at writelock failed, wait for a readlock of one byte after the last byte read. After acquiring the readlock, immediately release the readlock, and read to eof.<br /><div><br /></div>individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-17466749613890357822018-01-06T11:07:00.000-08:002018-01-06T11:07:56.583-08:00Frame ContextI was not there, but I suspect new programming languages are preceded by new ways to think about programs.
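A minimal sketch of the lock-at-eof half of the regular-file protocol from the interprocess-communication post above, in Python using the POSIX-only fcntl module. The function names and record handling are my own invention, and the named-pipe side is omitted; this only shows the writelock-at-eof race check.

```python
import fcntl, os

def append_record(path, data):
    """Append under an exclusive lock of effectively infinite length at eof.

    A sketch of the post's protocol: take a write lock from the presumed eof
    to infinity, re-check the file size after acquiring it (another writer
    may have appended first), and retry at the new eof if we lost the race.
    """
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        while True:
            start = os.fstat(fd).st_size
            fcntl.lockf(fd, fcntl.LOCK_EX, 0, start)  # len 0 locks to "infinity"
            if os.fstat(fd).st_size == start:         # lock really starts at eof
                break
            fcntl.lockf(fd, fcntl.LOCK_UN, 0, start)  # lost the race: retry
        os.pwrite(fd, data, start)
        fcntl.lockf(fd, fcntl.LOCK_UN, 0, start)
        return start
    finally:
        os.close(fd)

def read_from(path, offset):
    """The reader's side, reduced to its essence: read from the last
    processed offset to eof."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read()
```

In the full protocol, a reader that fails to get the writelock would instead wait on a readlock of one byte past its last read, release it, and then call something like `read_from`.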
The distinction between code and data inspired opposing viewpoints, exemplified by Lisp and C++: Lisp treated code as data, and C++ bundled code with data. A more literal use of the word function led to functional languages. To deal with concurrency, I find myself examining the mechanics of functions. A function is implemented as a frame on a stack. Threads and processes are implemented as contexts in a circular buffer. I believe the first step toward concurrent functions is to blur the distinction between frame and context. Functions can (re/en)-queue themselves or each other to a circular stack. C++ made the "this" pointer common to its member functions. Similarly, I'd make a sequential state a standard argument to each function on the circular stack. The return value of a function on the circular stack indicates how or whether to re-queue the function. Thus, using the standard argument, a function could pick up from where it last returned. To promote mix-and-match functions on the circular stack, each function on the circular stack is actually a cluster of functions, re-queued as a cluster, and advanced one to the next with another return value. Thus, the standard argument indicates intra-function location, and the progress through the cluster indicates inter-function location. User-specified arguments to the cluster of functions are (re/en)-queued to per-type circular buffers. Tagged arguments allow arguments to be shared between functions on the circular stack. Function clusters have a standard argument called a layer that can be changed during the function invocation, is retained for re-invocation, and is copied for new clusters. Thus, function clusters are collected into layers with the same initial layer standard argument. Often, the layer standard argument is used as a tag to share arguments across the layer. 
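The circular stack of function clusters can be sketched in Python (not the post's language); the directive names, tuple layout, and example functions are illustrative assumptions, and the per-type argument buffers are omitted.

```python
from collections import deque

# Return directives: re-queue the same function, advance within the
# cluster, or retire the cluster (names are mine, not the post's).
REQUEUE, ADVANCE, DONE = range(3)

def run(circular_stack):
    """Drive clusters of functions on a circular stack. Each entry pairs a
    cluster with its sequential-state and layer standard arguments; the
    function's return value says how or whether to re-queue."""
    while circular_stack:
        cluster, index, state, layer = circular_stack.popleft()
        directive, state = cluster[index](state, layer)
        if directive == REQUEUE:
            circular_stack.append((cluster, index, state, layer))
        elif directive == ADVANCE and index + 1 < len(cluster):
            circular_stack.append((cluster, index + 1, state, layer))
        # DONE, or advancing past the last function, retires the cluster

trace = []

def count(state, layer):
    # picks up from where it last returned, via the state standard argument
    state += 1
    return (REQUEUE if state < 3 else ADVANCE), state

def report(state, layer):
    trace.append((layer, state))
    return DONE, state
```

Running `run(deque([((count, report), 0, 0, "layerA")]))` leaves `trace == [("layerA", 3)]`: the cluster re-queues itself until its state reaches 3, then advances to its second component.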
This permits a function to en-queue a cluster with one component specified as a function that uses the layer's arguments in addition to the arguments shared by the cluster components.individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-59310141033827745242017-12-31T12:49:00.000-08:002018-01-27T13:30:34.044-08:00EntropySome time ago, I read a book by Penrose called "The Emperor's New Mind" that argued against the possibility of artificial intelligence. As indicated in a previous post, I believe artificial intelligence is a misnomer; it should be referred to as hyper-discipline. Penrose's approach was to make plausible alternatives to the assumptions of artificial intelligence researchers. One alternative was that evolution decreases the entropy of life by dissipating the entropy to space. Just as discovery of extraterrestrial intelligence would have more consequence to social paradigm than to scientific advancement, a precise definition of life in terms of entropy would have more consequence to social paradigm than to science. In this post I will attempt to collect some elements of a model of the human condition. With admitted imprecision, I define life as anything that dissipates entropy without creating too much entropy in an environment that dissipates enough entropy, where "too much" and "enough" mean that the entropy of the environment in general decreases, where "in general" is left undefined. Furthermore, I define science as a plane with life and the minimum entropy at the origin. In this scheme, technology is lines in science not through life, and the further technology gets from life, the more entropy it has. To me, the only purpose of wealth is to influence people's opinions. For example, consider Caltrain, the diesel commuter train between San Jose and San Francisco. 
The beneficiaries of electrification and elevation of this train would include the lower-income communities between the expensive communities at its ends, because then they could live inexpensively, and work lucratively. However, a Luddite or NIMBY mentality reduces progress. Who benefits from a mentality that reduces progress? My conclusion is that developers are using their wealth to promote a mentality that slows progress. A similar argument would apply to climate change. Thus, in my model of the human condition, money is how opinion is shaped, opinion determines which technologies are used, which technologies are used determines the creation of entropy, and the creation of entropy determines sustainability. Note that environments, civilizations, technologies, species, and behaviors that produce more entropy last less long. The moral is that being obliged to discontinue a behavior is less pleasant than choosing to discontinue it.individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-1902212445320805982017-10-13T15:02:00.000-07:002017-10-13T15:02:23.598-07:00Stocks and FlowsRecently I have gotten interested in stocks and flows. When combined with feedback and delays, they are revelatory. They are simple, yet their importance has only recently been realized. In particular, J W Forrester founded system dynamics in the 60s. Because system dynamics is turning out to be the best hope for salvation from climate change, I feel a little guilty about applying it to my own silly projects. However, the temptation is too great, so I plan to use a dynamic system to make music. The beauty of it is, everything usually done with several different components, such as oscillators, filters, and sequencers, can be done with just stocks and flows in a topology. In a sense, stocks and flows are the fundamental constituents of musical instruments. 
Because computers are so fast, the timewheel algorithm is so efficient, and the leverage of simple calculations is so great, I feel confident a corpuscular model of stocks and flows will suffice to provide sound of many different qualities. Consider a set of stocks of amounts. Rather than calculate how the amounts change as they flow from stock to stock, instead schedule fixed-size corpuscles to be transferred. Large flows are modeled by frequent transfers, and small flows by infrequent transfers. Also scheduled are changes to the flow rates. To model feedback, simply make the changes to flow rates depend on stock amounts, and to model feedback delay, simply schedule the flow changes to take effect some time after their calculation. Because I am already in the middle of a project to model, display, and manipulate polytopes, I will use the polytope faces as flow gates between regions containing stock amounts at points in the regions. Thus polytopes form membranes that in general prevent flow, but have special faces that allow flow dependent on any/all stock amounts at some prior time. Stocks, as points, reside in areas bounded by overlapping polytopes, and are fed and drained by special faces into their area.individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-42379081691442873382017-08-22T11:40:00.000-07:002017-08-22T13:02:43.432-07:00MethodologyAesthetics are the most important thing about a computer language. Lazy or eager, imperative or declarative, object or aspect, template or macro, strong or weak, none matters as much as being pleasant to the eye and mind. Assembly had equilength lines. Lisp was conceptually simple, but impossible to look at. C had logical-sounding keywords, like if, for, goto. C++ had good salesmanship. Haskell has significant indentation. 
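The corpuscular scheme in the stocks-and-flows post above can be sketched with a priority-queue timewheel in Python. The flow-rate units, names, and constant rates are my assumptions; feedback would make the re-scheduled interval depend on current stock amounts, and delay would apply a rate change some ticks after it is computed, neither of which this sketch attempts.

```python
import heapq

def simulate(stocks, flows, until):
    """Corpuscular stock-and-flow sketch: instead of integrating rates,
    schedule fixed-size corpuscle transfers, frequent for large flows and
    infrequent for small ones. `flows` maps (src, dst) to a rate in
    corpuscles per unit time."""
    events = [(1.0 / rate, (src, dst, rate)) for (src, dst), rate in flows.items()]
    heapq.heapify(events)
    while events and events[0][0] <= until:
        t, (src, dst, rate) = heapq.heappop(events)
        if stocks[src] > 0:               # the flow gate: only move what exists
            stocks[src] -= 1
            stocks[dst] += 1
        heapq.heappush(events, (t + 1.0 / rate, (src, dst, rate)))
    return stocks
```

For example, `simulate({"a": 10, "b": 0}, {("a", "b"): 1.0}, 5)` transfers one corpuscle at each of t = 1..5, ending with five in each stock.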
But to properly take advantage of this most crucial feature of Haskell, one must use single space indentation, and only when absolutely necessary. Also, keep lines and identifiers short. Except after >>=, use one character identifiers near the end of the alphabet for lambda arguments, and except for clarity, use one character identifiers near the start of the alphabet for named function arguments. Name lemma functions the same as the main function, except with a single capital letter suffix. Use let instead of where, unless the variable is used in guards or multiple branches. Except to break a rule, never use do notation. Freely pass IO arguments to functions other than >>= and >>. Use <$> and <*> sparingly. Get to the let as soon as possible in IO functions.individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-28720291682939869342017-04-07T13:30:00.000-07:002017-04-07T13:39:20.731-07:00Sculpting Polytopes<div><span style="-webkit-text-size-adjust: 100%;">Sculpt </span><span style="-webkit-text-size-adjust: 100%;">displays a polytope</span><span style="-webkit-text-size-adjust: 100%;">, and in interactive mode, the left mouse button (de)selects pierce point(s), changing mode deselects pierce point(s), and the right mouse button switches modes with a menu. Mouse modes are rotation about the pierce point, translation of the pierce point</span><span style="-webkit-text-size-adjust: 100%;">, and rotating about the focal point. Roller button modes are </span><span style="-webkit-text-size-adjust: 100%;">rotation about the pierce point line of sight, scaling from the pierce point, driving forward to and back from the pierce point. Moving</span><span style="-webkit-text-size-adjust: 100%;"> the operating system window also translates, so the model appears fixed behind the screen. Mouse and roller button modes are a matrix of submodes of transform mode. 
Nontransform modes are random refinement through the pierce point with roller-button-controlled cursor warp, additive sculpting above the pierce point, subtractive sculpting under the pierce point, and pinning two pierce points and moving a third pierce point by mouse and roller button.</span></div><div><span style="-webkit-text-size-adjust: 100%;"><br /></span></div><div><span style="-webkit-text-size-adjust: 100%;">Directory .script, or the -d directory, has numeric, space, embedding, and polytope representations saved,</span><span style="-webkit-text-size-adjust: 100%;"> timestamped, classified by backlink automatically, and named manually. And .script, or the -d directory, has configurations such as light color and direction, picture plane position and directions, focal length, window position and size, and refine warp.</span></div><div><span style="-webkit-text-size-adjust: 100%;"><br /></span></div><div><span style="-webkit-text-size-adjust: 100%;">Option -i starts interactive mode, -e "</span><span style="-webkit-text-size-adjust: 100%;">script" loads a metric expression to periodically display heuristic animation, -d "dir" changes directory, -n "dir" initializes a new directory with current state, -r randomizes the lighting, -o "file" saves format by extension, -f "file" loads format by extension, -l "shape" replaces current by builtin shape, -t "ident" changes current by timestamp or name. Options are processed in order, so interactive sessions, animations, directory changes, initializations, loads, and saves can occur in any order. If the .script or -d directory does not exist or is empty, it is created with a regular tetrahedron, random lighting, and a window centered on screen. It is an error for the -n directory to exist, for the -f file to not exist, or for the -o file to exist. 
Errors are recoverable because directories contain history, and error messages contain instructions on how to recover.</span><br /> <div><span style="-webkit-text-size-adjust: 100%;"><br /></span></div><div><span style="-webkit-text-size-adjust: 100%;">The look and feel of sculpt is turn-based. Even metric-driven animation only updates the displayed vertices periodically. The rotations and such occur continuously through matrix multiplication, but the model remains rigid. Pin-and-move of a plane is represented by a wire frame, updating vertices and possibly faces only after action completion.</span></div><div><span style="-webkit-text-size-adjust: 100%;"><br /></span></div><div><span style="-webkit-text-size-adjust: 100%;">Supplemental features to sculpt include graffiti on faces, windows to other polytopes on faces, system calls and icons on faces, sockets to read-only polytopes on faces, jumping through faces to other polytopes, and user authentication and kudos for various modifications to polytopes from requests through a socket.</span></div><div></div></div>individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-42753259289084734202017-04-05T16:48:00.001-07:002017-04-05T16:48:06.140-07:00Classifying PolytopesThe properties of polytopes that I chose to keep invariant are discontinuities, flatness, collinearity, and convexity. To indicate that two points on a polytope are collinear, or cohyperplanar, I collect the points into boundaries and intersections between boundaries, such that two points are coplanar iff they are in the same boundary. The discontinuities in a polytope occur only where boundaries intersect. To understand convexity, note that intersections between halfspaces are convex. Thus, if a discontinuity is concave, it consists of more than one halfspace intersection of the same intersecting boundaries. 
I call the halfspace intersections polyants, and specify them as maps from boundary to side. Thus, in an n dimensional space, a vertex has 2^n polyants, and an edge has 2^(n-1) polyants. If more than one polyant of a vertex has points near the vertex in the polytope, then the vertex is not convex, and similarly for edges, and so on. Note that the polytope has only the empty polyant, and the single boundaries each have two polyants. Wrt a boundary in its domain, a polyant is significant iff points in the polyant near the boundary are near points both in and not in the polytope. In fact, a two boundary polyant is significant iff one of the boundaries is significant in the section of the polytope by the other boundary. Thus, a polytope is a graph of polyants. Since a polyant is specified by boundaries and sides, and a graph is a map from polyant to set of polyant, equivalent polytopes are found by permuting the boundaries and mirroring sides across boundaries.individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-38863321477069013352017-03-31T07:46:00.000-07:002017-03-31T08:57:48.724-07:00RepresentationsI know nothing about representation theory, but in this context, a representation is a set of tuples. A relation is a set of two element tuples, and a function is a relation in which the first element of each tuple occurs in no other tuple. You can think of a relation as a function with range elements that are sets. Thus, the result of the function is the set of second tuple elements of tuples that have the function input in the first element. You can think of multivariable functions as sets of tuples with more than two elements. In prior posts, I represented space as a matrix of sides, where row(column) indicated boundary, and column(row) indicated region. In my Haskell code, the first representation I chose was a list of lists of region sets. 
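The polyant encoding from the classifying-polytopes post above can be sketched in Python as sets of (boundary, side) pairs, with sides 0 and 1. The relabeling functions below are my illustrative reading of "permuting the boundaries and mirroring sides across boundaries"; the names are not from the post.

```python
def relabel(polyant, perm, mirror):
    """Apply an equivalence move to a polyant, i.e. a map from boundary to
    side stored as a frozenset of (boundary, side) pairs: rename boundaries
    by `perm` and flip the side across each boundary in the set `mirror`."""
    return frozenset((perm[b], s ^ (b in mirror)) for b, s in polyant)

def relabel_graph(graph, perm, mirror):
    """Relabel a polytope given as a map from polyant to set of polyants."""
    return {relabel(k, perm, mirror): {relabel(v, perm, mirror) for v in vs}
            for k, vs in graph.items()}
```

Under this encoding, a vertex in n dimensions is a polyant over n boundaries, and two polytopes are equivalent when some `perm` and `mirror` map one graph onto the other.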
The position in the outer list indicated boundary, and the position in the inner list indicated side. In subsequent representations, I indicated the boundary explicitly, instead of implicitly by list position. I also used representations where the innermost sets are sets of boundaries instead of regions. Whenever I came up with a new representation, I worried whether I could convert between one and another. Now that I understand representations are just sets of tuples, I no longer worry about converting; converting is as simple as changing the order of the elements in the tuples. In future computer architectures, I predict the preferred representation will be sets of tuples. In a computer, a set of tuples could be implemented as a CAM, a content addressable memory. The challenge would be to make the CAMs in the computer completely configurable. Right now, we are limited to RAMs, random access memories, because they are relatively easy to implement. Note that even RAMs are not completely configurable; some sequences of access are more efficient than others, depending on the particular implementation.individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-89045714156303236592017-03-29T13:28:00.000-07:002017-03-29T13:28:17.043-07:00The Choose FunctionIn my Haskell code, I chose to represent sets as lists. In practice, this means I have to prevent duplicates in the lists that represent sets. Just to be difficult, I also use lists for ordered pairs of same-type things, or maps from index to things. If the things are of different types, then ensuring there are no duplicates is a no-brainer and they might as well be ordered, so I use tuples. This is all very mathematical, and functions resemble constructive proofs. Where computational functions differ from proofs is in which shortcuts are taken. Proofs don't care how long they take to execute. 
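The sets-of-tuples view from the representations post above can be sketched in Python (rather than the author's Haskell). The sidedness triple (boundary, region, side) is an illustrative choice of schema; the point is only that conversion between representations is a permutation of tuple slots.

```python
def reorder(rep, order):
    """Convert between representations by permuting tuple positions."""
    return {tuple(t[i] for i in order) for t in rep}

def as_function(rep):
    """View a set of (key, value) pairs as a map from key to set of values,
    i.e. a relation read as a function with set-valued range."""
    out = {}
    for k, v in rep:
        out.setdefault(k, set()).add(v)
    return out
```

For example, swapping the first two slots of a boundary-major sidedness set yields the region-major set, with no other bookkeeping.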
Computer functions are expected to complete in a reasonable time. I use a function called choose to document when I am deviating from the mathematical ideal to make a function more computational. Choose is defined as head; it returns the first element of the list. Choose is intended for use only on lists representing sets, but that is not the only intended restriction on its use. Choose is intended for use when any element of the set would be correct, and the choice of element changes the result of a function it is directly or indirectly used in. Where the choice would not affect the result, I use head. The reason this makes the functions using choose less mathematical and more computational is that the alternative is to return all valid results, not just one valid result. For example, my superSpace function uses choose often to simplify the computational problem, even though the simplest solution to the mathematical problem is that there are multiple solutions.individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-75053818745706868022017-03-24T13:23:00.000-07:002017-03-24T15:11:08.578-07:00RegularityTill now, I have focussed on irregular spaces and polytopes without coincidental boundaries, but spaces with missing regions have been a nagging possibility. For example, if a space is caught in the process of migrating, then the migration itself is like a sidedness space with a missing region, or an affine space with more than the dimension's worth of boundaries through a point. Parallelism is also a form of degenerate space. Imagine a migration of a round space, where the migrating region is an outside region in a flat rotation space of the round space. Now consider embeddings of polytopes into spaces. Since an embedding is a subset of the regions in the space, one can specify the embedding as the degenerate space consisting of just the embedded regions. 
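The choose idiom from the post above, transliterated from Haskell's `head` into Python for illustration; the example function and its names are hypothetical, not from the author's code.

```python
def choose(elements):
    """Pick an arbitrary element of a list standing in for a set. Used to
    document that any element would be correct, even though the choice can
    change the result of the enclosing function."""
    assert elements, "choose of an empty set"
    return elements[0]

def one_pair_summing(xs, total):
    """A computational shortcut: return one valid pair instead of all of
    them (the mathematical ideal would return every solution)."""
    solutions = [(a, b) for a in xs for b in xs if a < b and a + b == total]
    return choose(solutions)
```

`one_pair_summing([1, 2, 3, 4], 5)` has two valid answers, (1, 4) and (2, 3); choose documents that returning either would be correct.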
Restoring the degenerate space to a linear space is one to many, but each restoration is a valid embedding of the same polytope. Now note that regular polytopes contain parallel sides. Irregular and regular polytopes are both degenerate spaces; the only distinction is whether there is a partial restoration to a space missing outside regions.individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-64260632316075944622017-03-23T15:04:00.001-07:002017-03-23T17:12:26.462-07:00Minimum Equivalent<br /><div></div><br /><div>To find equivalence classes of spaces, and of polytopes, it is impractical to try all permutations of a representation to find if it makes one equal to another. Assuming permutations of representations are comparable as greater or lesser, an approach would be to find the permutation that minimizes a representation the most. Then all equivalent representations would minimize to the same member of the equivalence class. To find the minimizing permutation, choose transpositions such that the first transposition chooses which identifier will be the smallest. Of course, multiple transpositions could cause the smallest identifier to be first in the representation. Some transposition sequences would get weeded out only after subsequent transpositions failed to compete with their peers. Weeding out transposition sequence prefixes as unable to recover potential after failing to put the smallest identifiers first in the representation is justified because no subsequent transposition affects the position of lesser identifiers. This algorithm is not O(n), but it is potentially, and I suspect in practice, much better than the O(n!) of the naive approach of trying every permutation. Since I needed to use the same algorithm for both spaces and polytopes, I created a Haskell class of permutation types. Each permutation type must implement refine and compare functions. 
A refine function returns a list of permutations, each with one more transposition than the given permutation. A compare function takes two partial permutations and compares their application to a representation. To accomplish the compare function, each type in the class is actually a wrapper around an original representation and a partial permutation of that representation.</div>individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-7487321410549674282017-03-17T16:05:00.001-07:002017-03-17T16:10:19.208-07:00Simplex Overlap Equivalence<div>What is a polytope? For one thing, it has flat sides. For another, two polytopes can be equivalent without being the same. Anecdotally, all right triangles are in some sense equivalent. I would argue that the various kinds of simplex are limits of sequences of irregular simplices, and all irregular simplices are equivalent up to dimension. I can define a polytope as a collection of maximal super-region spaces, together with their intersection super-region spaces. The intersection spaces are necessary because otherwise the relation between the maximals would not be captured. I can convert any finite union of intersections of real vector halfspaces to a polytope, and any sufficiently small tweak of the halfspaces produces the same polytope. Furthermore, my personal criterion for polytope equivalence is satisfied. Simplex overlaps are equivalent iff they are equivalent as polytopes. In other words, a simplex overlap is two maximal super-regions, related by their one intersection. A maximal super-region space is the space of a super-region, covered by the polytope's regions embedded in the space, that is not contained by any other such super-region. <span style="-webkit-text-size-adjust: 100%;">A super-region space is a space with all boundaries attached to an inside region. 
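A brute-force reference for the minimum-equivalent idea from the post above, in Python: relabel by every permutation and keep the least image. This is exactly the naive O(n!) search that the post's refine/compare classes prune; the pruning itself is not attempted here, and the set-of-frozensets representation is my assumption.

```python
from itertools import permutations

def image(rep, perm):
    """Apply a relabeling to a representation given as a collection of
    frozensets of identifiers, returning a comparable nested tuple."""
    return tuple(sorted(tuple(sorted(perm[x] for x in part)) for part in rep))

def canonical(rep):
    """The minimum image over all permutations of the identifiers: all
    equivalent representations minimize to the same member of the class."""
    labels = sorted({x for part in rep for x in part})
    return min(image(rep, dict(zip(labels, p)))
               for p in permutations(range(len(labels))))
```

Two representations are then equivalent iff their canonical forms are equal; the post's algorithm reaches the same minimum while discarding partial permutations that fail to keep the smallest identifiers first.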
Much as section spaces uniquely identify extensions of a space, super-region spaces uniquely identify super-regions of a space. To find the section space, take the regions attached on one side of the extension boundary</span>. To find the super-region space, take the boundaries attached to the super-region. See below for a proof that there is at most one inside region attached to all boundaries. Thus, a polytope is a collection of super-region spaces together with sidednesses of the super-region wrt attached boundaries. A super-region space is specified as a map from attached boundary to the super-region space of the facet of the boundary. For example, a three-dimensional super-region space has a face per attached boundary, each face has a segment per significant boundary, and each segment has two significant vertex boundaries. A facet is identified by the set of boundaries in the boundary map path that leads to it. Thus, the traditional definition of polytope as a graph of facets suffices for convex polytopes, and for concave polytopes, the definition above suffices.<br /><span style="-webkit-text-size-adjust: 100%;"><br /></span></div><div>Super Region Theorem</div><br /><div>There is at most one inside region attached to all boundaries. Suppose two points, p and q, are in regions attached to all boundaries. Toss out boundaries without putting either p or q in outside regions, and without putting p and q in the same region. Finally, every boundary makes one or both of p and q outside or makes p and q the same. If there were a boundary, b, that separates only p from an outside region, then b is not attached to q, so all boundaries except the mutual boundary, c, separate both p and q from outside regions. Consider two outside regions, r and s, opposite p and q<span style="-webkit-text-size-adjust: 100%;"> wrt boundary b. R and s must be opposite each other wrt c because r and s are the same after removing c. 
Thus, every outside region is attached to c, so there are 6 regions total. This is not possible in any dimension, so there must be at most one region attached to all boundaries.</span></div>individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-1445015730566872602017-01-30T12:04:00.000-08:002017-01-30T12:04:35.738-08:00PoliticsWell, politics and polytope both start with pol, right? Anyway, my poster for the upcoming March for Science (https://www.marchforscience.com/) is "Trump = Disease / Science = Cure". This is a 2^2, one-to-one, 2d-sub-simplex. It brings to mind an interesting way to think about affine spaces. They are collections of one-to-one mappings on equal-sized subsets of regions. In other words, each boundary is a one-to-one mapping between the regions attached to it on one side, and the regions attached on the other side. The universe has many more regions than just those attached to a particular boundary. Wrt Trump's ban on people from countries that Trump does not currently have business interests in, I believe it, in addition to other attacks on democracy, will reduce terrorism against the US. In general, anything undemocratic, such as gag orders, and appointment of corrupt oligarchs to cabinet positions, decreases the threat of terrorism, because it is not authoritarianism that scares terrorists; democracy scares terrorists. But let's face it, corruption in the White House is a much bigger threat than terrorism. Many more people will die from global warming than from terrorism. Trump's priorities are messed up. 
So get out there and protest, and don't stop until the Koch brothers are bankrupt, Trump's name is removed from the towers, and former coal miners, loggers, truckers, factory workers, and other red-necks are installing solar panels, building hydroponics, riding bicycles, and restoring wetlands.individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-64428264029683508472016-12-01T09:55:00.000-08:002016-12-07T12:11:55.236-08:00Metaphysics<div>In this post I will exercise my brand of metaphysics. At least it is harmless, right? I am not very well read, but from what I understand from sound bites, I think I'm a Cartesian. The separation between mind and body is hogwash, but I do believe life has emergent properties including intelligence and discipline.</div><div><br /></div><div>Just as science is less one-dimensional than religion, there may well be a discipline that is less two-dimensional than science. Since genius greater than Einstein is not likely, super-intelligence is required for three-dimensional discipline. Just as the internet does not make people less religious, we need not fear pa/matricide at the hands of super-intelligent progeny. On the contrary, we can look forward to a super-intelligent constitution that guarantees freedom of science, just as our atheist founding fa/mothers enshrined freedom of religion in the same amendment that protects the fourth estate. What wonders will super-intelligence conflate with freedom of science?</div><div><br /></div><div>Just as Sagan condescended to explain science in one-dimensional religious terms like "numinous", super-intelligence will likely explain three-dimensional discoveries in two-dimensional mathematical terms. 
In fact, just as Mendel had a prominent one-dimensional section of his essentially two-dimensional existence, a super-intelligence might partake exuberantly of science.</div><div><br /></div><div>In short, though religion has the two poles of spirit and matter, science has the two dimensions of theory and experiment. Our experience of this difference is that religion lacks the uncertainty of science.</div><div><br /></div><div>With regard to politics, which science fiction is supposed to be about, capitalism has failed as miserably as communism. Technology, the product of science, is necessarily a mere section of science. We can expect super-intelligence to produce many disciplines, each as de/constructive as science. Indeed, we may think of science as the plane constructed through two lines, art and religion, that share a point, life. Similarly, super-intelligence will be constructed as the space that contains two sciences that share a single technology. Perhaps the sciences are finite physics and not-even-wrong physics, and the technology is quantum-bio-computing. It is doubtful that super-intelligence will save us from nuclear weapons or global warming. After all, science did not save religion from denial or hypocrisy, nor art from post-modernism.</div><div><br /></div><div>On an alien planet with two different kinds of life, a single religion connecting the two kinds of life might form even before intelligence. And on a planet with three kinds of life, science might be integral to evolution. On our planet, the leap from life to religion may have required as much intelligence as the leap from religion to science. Thus, super-intelligence is a misnomer; it should be called hyper-science.</div><div><br /></div><div>A flaw in my argument is how different religion is from technology. This may be because religion is merely linear, whereas technology is affine. If religions, and by extension science, are linear, then life must be the origin. 
And as we know, life is not contained by technology. Also, there are things like sport that are neither religion nor product of science. Consider the ancient Brazilian sport of playing with a rubber ball. Note that the rubber ball was a product of technology, and sport contains life. Thus, sport is the one-dimensional discipline constructed through life and a product of a technology.</div><div><br /></div><div>At this point, I hope it is clear that I consider religion a derangement. Rather than decrease my respect for religion, this increases my respect for derangement. Since I know of no other way to construct a plane except from lines, I am forced to consider religion a necessary stepping stone. And since our experience with fossil fuel demonstrates the folly of burning bridges, I must concede the utility of keeping religion around.</div><div><br /></div>individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-44614224842116635732016-11-20T15:38:00.000-08:002016-11-26T14:52:54.684-08:00Electronic Music<div>Here is an application intended to draw emergent properties from general purpose computers. This is in the tradition of the west coast modular synth, in the sense that it sacrifices ease of use for surprise.</div><div><br /></div><div>The mouse position rotates a polytope. The roller button rotates the projection plane. The buttons switch between ends of the projection line. The polytope region volumes are noise volumes. The space boundaries are tone filters. The projection plane areas are durations. The projection plane boundaries are envelope filters. The projection line lengths are tempos. The projection line boundaries are rhythmic filters. Function keys (un)lock boundaries, (un)fill regions. The configuration file contains metric and boundary mappings. Branching history allows branching playback. 
Create new branches relative to other branches.</div><div><br /></div><div>Just as the effect of multiple color filters is cumulative, so is the effect of multiple tone filters. Similarly, the effect of multiple envelope or rhythm "filters" should be cumulative. Thus, the volume, area, and length metrics are retained, but the filter orderings are lost, except as deduced from adjacency.</div><div><br /></div><div>The output of the three-dimensional tone filters is the richest in the sense that they are repeating waveforms. The two-dimensional filters, though geometrically more numerous, are less rich in the sense that they produce unrepeated envelopes, or portions of waveforms. Note that the specification of an envelope filter is still a vector of harmonic amplitudes. The one-dimensional filters are again geometrically more numerous, but again less rich: they not only produce unrepeated phrases, they also interpret the negative portions of the waveform as silence, ignoring any negative information.</div><div><br /></div><div>Another enhancement would be feedback. Thus, when a note plays, the region responsible for it could go on to the next fundamental tone. Generally, each projected region could go on to the next parameter. To avoid expressing my incompetence at harmony and melody, the parameters could be chosen by 1/f randomness or by golden-ratio increments modulo one.</div><div><br /></div><span style="font-family: "times new roman";">The escape key halts recording and rotation, allowing mouse lift. Continuity is preserved across pauses by interpreting mouse and roller button motions as accelerations instead of motions. Discontinuities are allowed by enabling rotation and rate change separately from recording. Loops are recorded by interpolating from one branch to another and going to playback mode. 
Switching to record mode starts a new branch relative to the current one, in the sense that the rotation rates start from where the branch started.</span>individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-20018433651643107762016-11-20T12:39:00.000-08:002016-11-21T11:53:24.847-08:00Intervals Are Not PolytopesOne-dimensional spaces are qualitatively different from spaces with 2 or more dimensions. My definition of "linear", as a space whose subspaces have the correct number of regions, does not suffice to make one-dimensional spaces convertible to number lines. Here is a counter-example.<br /><br />[[[0],[1,2,3]],[[1],[0,2,3]],[[2],[0,1,3]]]<br /><br />Interpret this as a list of boundaries, each of which is a pair of half-spaces, and interpret the numbers as regions. As a two-dimensional space this is a simplex with empty vertex regions. As a one-dimensional space, each subspace has m+1 regions, where m is the number of boundaries. As a result of testing my Haskell code, I discovered this counter-example. I rewrote my isLinear and superSpace functions to behave differently for one-dimensional arguments.<br /><br />Stay tuned for whether there are counter-examples of 2 or more dimensions. If the proofs in previous posts are correct, then there will be none. Math is scientific in the sense that we can never know for certain that a proof is correct.<br /><br />individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-86904591963308784972016-11-06T08:16:00.000-08:002016-11-06T08:16:03.536-08:00JourneyI've restarted my Haskell program to take a more naive approach. Rather than go for optimization with lots of different representations that get saved to prevent recalculations, my new approach is to go for clarity. I found that the prior approach involved a lot of boring code. 
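Returning to the counter-example in the Intervals post above: it can be checked mechanically. Here is a minimal Haskell sketch in my own names (isOneDimLinear and realizableOnLine are not from the repository); it verifies that every subspace has m+1 regions, yet no ordering of the regions places each half-space at an end of a number line.

```haskell
import Data.List (nub, permutations, sort, subsequences)

-- a space is a list of boundaries; a boundary is a pair of half-spaces
type Space = [[[Int]]]

-- the counter-example from the Intervals post
example :: Space
example = [[[0],[1,2,3]],[[1],[0,2,3]],[[2],[0,1,3]]]

regions :: Space -> [Int]
regions = sort . nub . concat . concat

-- sidedness of a region wrt each boundary of a (sub)space
signature :: Space -> Int -> [Bool]
signature sub r = [r `elem` head b | b <- sub]

-- every subspace with m boundaries distinguishes m+1 regions (the 1-d count)
isOneDimLinear :: Space -> Bool
isOneDimLinear s = all ok (subsequences s)
  where ok sub = length (nub [signature sub r | r <- regions s]) == length sub + 1

-- can the regions be ordered so each half-space is a prefix or suffix,
-- as the two sides of a point on a number line would be?
realizableOnLine :: Space -> Bool
realizableOnLine s = any fits (permutations (regions s))
  where
    fits p = all (\b -> endSet (head b) p) s
    endSet h p = let n = length h
                 in sort (take n p) == sort h || sort (drop (length p - n) p) == sort h
```

For the example, isOneDimLinear holds but realizableOnLine fails: each boundary has a singleton half, and a line has only two ends for three singletons.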
I was too tempted to automate the production of the boring code, and that became a nontrivial challenge. Anyway, my new code is here, and it is much more readable. Note that though the repository is still called sidegeo, the module directory is AffTopo. AffTopo stands for affine topology, which I believe more accurately describes the math in this blog than my original choice of sidedness geometry.<div><br /></div><div>https://github.com/individkid/sidegeo/blob/master/src/AffTopo/Naive.hs</div><div><br /></div><div>Perhaps more or less related to my coding efforts, I have taken a new perspective (pun intended) on the definition of polytope. In short, polytope (like creativity) means many things to many people in many contexts. Here are a couple of possible definitions.</div><div><br /></div><div>If you project your polytope onto a picture plane, it produces a (simpler?) polytope in one fewer dimension. If you consider all possible projections of a polytope, the polytope might be well defined so long as its projections are well defined. Since polytopes of zero dimension are well defined, this recursive definition might work. I say might, because the projections of a polytope have some relations not captured by simply collecting them into a set.</div><div><br /></div><div>As another example of defining polytope, start with the usual graph of facets, and add convexity around each facet. My coding experience has increased my respect for directness. On the other hand, without curves we would not know that lines are straight.</div><div><div><br /></div></div>individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com1tag:blogger.com,1999:blog-3543545167011713606.post-18673539086450067902016-03-29T09:26:00.001-07:002016-04-09T14:46:17.070-07:00Hello HaskellI'm proud of my hello world because it demonstrates my understanding of lazy evaluation. 
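Not the hello-world program itself, but a minimal illustration of the kind of laziness meant here: a definition may refer to its own result, and only the demanded prefix is ever computed.

```haskell
-- an infinite list defined in terms of itself; lazy evaluation means
-- only the elements actually demanded are ever computed
nats :: [Integer]
nats = 0 : map (+1) nats

-- demanding a finite prefix terminates despite the infinite definition
firstFive :: [Integer]
firstFive = take 5 nats
```

firstFive evaluates to [0,1,2,3,4]; the rest of nats remains an unevaluated thunk, effect waiting on cause.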
Lazy evaluation appeals to me because it reminds me of cause and effect<span style="-webkit-text-size-adjust: auto; background-color: rgba(255, 255, 255, 0);"> in its relative and quantum laziness</span>. This, my first Haskell program, is the middle step of a program to produce code that implements conversions implicit in the name of the converter.<br /><div><br /></div><div><a href="https://github.com/individkid/sidegeo/tree/master/src/Implicit">https://github.com/individkid/sidegeo/tree/master/src/Implicit</a></div><div><br /></div><div><br /></div>individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-10301235784087242892016-02-20T17:55:00.000-08:002016-02-23T10:29:41.870-08:00No Mention of DimensionRecently I have started using Haskell instead of those other languages. As a result I have gotten closer to generating all spaces and polytopes of a given complexity. An observation results from this progress. None of my representations or functions, from sidedness to cospace, depend on dimension. Because Haskell is strongly typed, the fact that my code compiles is nontrivial. I still need to remove any runtime bugs that crop up, but I'm confident I'll be able to generate all simplex overlaps with merely implicit dimension. The way to imply a dimension is to make a simplex by removing one region from a space with no missing regions. Then the number of dimensions is one less than the number of boundaries in the simplex space. To create more complex spaces, find the cospace, which I can do without mention of the implicit dimension. With the cospace, you can extend the simplex with a sections space determined by any region in the cospace. In previous posts, I constructed the cospace by interpreting points as planes, after converting to vectors. That required choosing a partial ordering of the regions. Constructing a cospace without vectors does not require choosing a partial ordering. 
Because only outside regions have regions opposite all boundaries, inside regions can be specified by boundary sets. Thus, a space can be represented by a set of boundary sets. A similar representation of regions as boundary sets is also a good way to test equivalence between spaces. With that representation, only permutations of boundaries must be tried; permutations of regions are abstracted away. Trying permutations of boundaries does not suffice to find polytope equivalence. This is because a polytope is specified by a set of spaces constructed from the significant boundaries of significant vertices in the polytope, and vertex spaces have only outside regions.individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-54503105305319030642016-01-26T13:30:00.001-08:002016-01-26T13:54:41.977-08:00Dooy Binary SystemThe Dewey Decimal System is inadequate for online content. More than one dimension is required. But who chooses the dimensions? Rather, let people partially specify boundaries and points, and let there be as many dimensions as necessary to keep the space linear. By "partially specify" I mean that they would create new boundaries by supplying two disjoint sets of points. Also, they could refine a boundary's position by adding points to the boundary's sets. The given points could be new or extant. Only adding a point to both sets of a boundary would be prohibited. An open question is whether books (pages? words?) should be points, boundaries, or regions. My intuition is that one-to-one between point and book would be best. Then books (pages? words?) 
in the same region would be similar.<div><br></div>individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-1827772256310538652015-11-24T13:40:00.001-08:002015-11-24T23:03:42.711-08:00Redefined AgainIn previous posts I have attempted to define polytope but succeeded only in defining interesting things such as migration, convex cover, disjoint cover, round space, cospace, and significant vertex. In this post I shall try again. With regard to a significant vertex V of a set of regions R, a boundary B is significant iff two regions R0 and R1 in the pencil of V are neighbors wrt B, R0 is in R, and R1 is not in R. Wrt significant vertex V of a set of regions R, a region is significant iff it is in the intersection of R and the pencil of V. Define polytope as a collection of round spaces with region sets, such that the centers of the round spaces are the significant vertices, the boundaries of the round spaces are the significant boundaries wrt center, and the region sets are the significant regions wrt center. The fact that the round spaces share boundaries captures the facet graph and coincidences of the polytope. The sidedness of the region sets in the round spaces captures the convexities around the vertices. I believe this definition is adequate to determine whether two sets of regions from the same or different spaces are embeddings of the same polytope. However, I can think of plenty of round space collections that do not embed in any flat spaces.individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-21832909909175776302015-11-08T16:45:00.001-08:002015-11-08T17:18:58.654-08:00DomesMy understanding is that designing geodesic domes is nontrivial because there are so many choices. Here I present a way to find every dome up to a given complexity. Consider all sections of all spaces up to that complexity. 
The cospace of the vertices on one of those sections has all planes through a point. Designate that point the center of a dome. The surface of the dome is the round space as defined in the "Coincidence" post. Note that not every region of the round space need be a facet of the dome. Some dome facets can be super-regions in the round space.individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-2393056822727961592015-10-30T13:59:00.001-07:002015-11-02T09:45:44.859-08:00Vertex Cospace PolytopeIn n dimensions, each set of n boundaries is a vertex. As in the algorithm to construct an affine plane through regions, we can interpret vertices as planes to obtain a space called a cospace. Cospaces are not strictly linear. Some nonempty regions can be degenerate in the sense that more than n planes can pass through the same point in n dimensions. For example, if we construct a cospace from the vertices that lie on a particular boundary, all planes in the cospace pass through the same point. I propose that any polytope, no matter the nature of its facets, convexities, or collinearities, can be identified by the cospace of the polytope's significant vertices. <span style="font-family: 'Helvetica Neue Light', HelveticaNeue-Light, helvetica, arial, sans-serif;">A vertex is significant if some polyant of each proper superpencil is improper wrt the regions in the polytope. A superpencil is a pencil of a subset of the boundaries of a pencil. A pencil of a set of boundaries is the set of regions with neighbors wrt the boundaries. A region is a neighbor wrt boundaries if its sidednesses are the same, except opposite for the boundaries. A polyant is the regions on the same side of a set of boundaries. A polyant of a pencil is the intersection between pencil and polyant. 
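The neighbor and pencil definitions above can be made concrete. A small Haskell sketch under my own encoding (regions as sidedness vectors, boundaries as indices; these names are assumptions, not the blog's code):

```haskell
-- a region encoded by its sidedness wrt each boundary, in boundary order
type Sides = [Bool]

-- r1 is a neighbor of r0 wrt a set of boundary indices: sidednesses agree
-- everywhere except at exactly the given boundaries, where they are opposite
neighbor :: [Int] -> Sides -> Sides -> Bool
neighbor bs r0 r1 =
  and [if i `elem` bs then a /= b else a == b | (i, a, b) <- zip3 [0 ..] r0 r1]

-- pencil of a boundary set: the regions that have a neighbor wrt it
pencil :: [Int] -> [Sides] -> [Sides]
pencil bs rs = [r | r <- rs, any (neighbor bs r) rs]
```

For example, on a line with two boundaries and regions [True,True], [False,True], [False,False], pencil [0] keeps only the two regions that flip across boundary 0.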
</span>individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0tag:blogger.com,1999:blog-3543545167011713606.post-43117545663962081162015-06-29T18:28:00.000-07:002015-06-30T11:56:11.492-07:00ConnectednessIn an undirected graph, how can I efficiently keep track of how many connected parts it has? I'd like to list the nodes such that all nodes of a connected part are together. In other words, if the graph has more than one connected part, I can divide the list between two consecutive nodes, such that no connected part of the graph has nodes in both lists. Consider a list of the nodes, and connect the entries of the list by edges from the graph. Some edges connect consecutive nodes, and some edges hop over nodes in the list. If, as edges from the graph are added to the list, the list is reordered such that the number of hops is minimized, then after all edges have been added, connected parts of the graph will be together in the list. Suppose nodes of a graph are listed in some order, and the graph has two connected parts. Consider a sublist of just the nodes from one connected part in the same order as the entire list. Count the hops in the sublist. Similarly, count the hops in the sublist of the nodes in the other connected part. If the sublists are dividable in the entire list, one entirely before the other, then the number of hops in the entire list is just the sum of the hops in the sublists. If the sublists are not dividable, then the number of hops is greater than the sum. Therefore, if the number of hops in the list is minimum, connected parts of the graph are dividable between two consecutive nodes in the list. Suppose I add edges to a list one at a time. I claim I can reorder the list so that the number of hops stays minimal. Suppose a list, L, with some edges has minimal hops, and I have an edge to add. Transpose the ends of the edge with consecutive nodes to shorten the new edge, so long as the number of hops is reduced. 
I claim the result has the minimum number of hops. Suppose, to the contrary, that there is a list with fewer hops. Consider that list with the new edge removed. That list would have fewer hops than L, contradicting the minimality of L. This contradiction proves my claim.individkidhttp://www.blogger.com/profile/12926316060956277498noreply@blogger.com0
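The hop count above is easy to state as code. A Haskell sketch in my own naming (hops and minHops are illustrative, not from any repository): hops charges each edge the number of list entries it skips over, and a brute-force minimum shows that an ordering grouping each connected part achieves the least hops.

```haskell
import Data.List (elemIndex, permutations)
import Data.Maybe (fromJust)

-- hops of an ordering: for each edge, the number of nodes it skips over
hops :: Eq a => [a] -> [(a, a)] -> Int
hops order edges = sum [abs (pos u - pos v) - 1 | (u, v) <- edges]
  where pos x = fromJust (elemIndex x order)

-- brute-force minimum over all orderings (feasible for small examples only)
minHops :: Eq a => [a] -> [(a, a)] -> Int
minHops nodes edges = minimum [hops p edges | p <- permutations nodes]
```

With edges (1,2), (2,3), and (4,5) — two connected parts — the interleaved ordering [1,4,2,5,3] has 3 hops, while the grouped ordering [1,2,3,4,5] has 0, the minimum, illustrating that a minimal-hop list keeps each connected part together.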