## Friday, March 31, 2017

### Representations

I know nothing about representation theory, but in this context, a representation is a set of tuples. A relation is a set of two-element tuples, and a function is a relation in which the first element of each tuple occurs in no other tuple. You can think of a relation as a function whose range elements are sets: the result of the function is the set of second elements of the tuples whose first element is the function's input. You can think of multivariable functions as sets of tuples with more than two elements.

In prior posts, I represented space as a matrix of sides, where the row (column) indicated the boundary, and the column (row) indicated the region. In my Haskell code, the first representation I chose was a list of lists of region sets: the position in the outer list indicated the boundary, and the position in the inner list indicated the side. In subsequent representations, I indicated the boundary explicitly, instead of implicitly by list position. I also used representations where the innermost sets are sets of boundaries instead of regions. Whenever I came up with a new representation, I worried about whether I could convert between one and another. Now that I understand that representations are just sets of tuples, I no longer worry; converting is as simple as changing the order of the elements in the tuples.

In future computer architectures, I predict the preferred representation will be sets of tuples. In a computer, a set of tuples could be implemented as a CAM, a content addressable memory. The challenge would be to make the CAMs in the computer completely configurable. Right now, we are limited to RAMs, random access memories, because they are relatively easy to implement. Note that even RAMs are not completely configurable: some sequences of access are more efficient than others, depending on the particular implementation.
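As a sketch of the idea (the `Int` encodings of boundaries, sides, and regions are assumptions for illustration, not the actual code), converting the old nested-list representation to a set of tuples, and reordering tuple elements to trade a region-valued representation for a boundary-valued one, might look like:

```haskell
import qualified Data.Set as Set

type Boundary = Int
type Side = Int
type Region = Int

-- old form: outer position = boundary, inner position = side,
-- innermost set = regions on that side of that boundary
type Nested = [[Set.Set Region]]

-- new form: everything explicit as a set of tuples
type Triples = Set.Set (Boundary, Side, Region)

-- make the implicit list positions explicit as tuple elements
toTriples :: Nested -> Triples
toTriples nested = Set.fromList
  [ (b, s, r)
  | (b, sides) <- zip [0 ..] nested
  , (s, regions) <- zip [0 ..] sides
  , r <- Set.toList regions ]

-- converting between representations is just reordering the tuples
byRegion :: Triples -> Set.Set (Region, Side, Boundary)
byRegion = Set.map (\(b, s, r) -> (r, s, b))
```

Once everything is a set of tuples, every conversion between representations reduces to a `Set.map` that permutes tuple elements.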

## Wednesday, March 29, 2017

### The Choose Function

In my Haskell code, I chose to represent sets as lists. In practice, this means I have to prevent duplicates in the lists that represent sets. Just to be difficult, I also use lists for ordered pairs of same-type things, or maps from index to things. If the things are of different types, then ensuring there are no duplicates is a no-brainer, and they might as well be ordered, so I use tuples.

This is all very mathematical, and the functions resemble constructive proofs. Where computational functions differ from proofs is in which shortcuts are taken. Proofs don't care how long they take to execute; computer functions are expected to complete in a reasonable time. I use a function called choose to document when I am deviating from the mathematical ideal to make a function more computational. Choose is defined as head; it returns the first element of the list. Choose is intended for use only on lists representing sets, but that is not the only intended restriction on its use. Choose is intended for use when any element of the set would be correct, but the choice of element changes the result of a function it is directly or indirectly used in. Where the choice would not affect the result, I use head. The reason this makes the functions using choose less mathematical and more computational is that the alternative is to return all valid results, not just one valid result. For example, my superSpace function uses choose often to simplify the computational problem, even though the simplest solution to the mathematical problem is that there are multiple solutions.
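The definition itself is trivial; the point is the name. A sketch (the `pickBoundary` example is a made-up illustration, not a real function from my code):

```haskell
-- choose documents a deliberate shortcut: any element of the set-as-list
-- would be correct, but which one is picked affects the caller's result
choose :: [a] -> a
choose = head

-- hypothetical example: pick one representative boundary from a set of
-- equally valid candidates; the mathematical ideal would return them all
pickBoundary :: [Int] -> Int
pickBoundary candidates = choose candidates
```

Wherever the pick could not affect the result, plain head appears instead, so a reader can grep for choose to find every place the code settles for one valid answer.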

## Friday, March 24, 2017

### Regularity

Till now, I have focused on irregular spaces and polytopes without coincidental boundaries, but spaces with missing regions have been a nagging possibility. For example, if a space is caught in the process of migrating, then the migration itself is like a sidedness space with a missing region, or an affine space with more than a dimension's worth of boundaries through a point. Parallelism is also a form of degenerate space: imagine a migration of a round space, where the migrating region is an outside region in a flat rotation space of the round space. Now consider embeddings of polytopes into spaces. Since an embedding is a subset of the regions in the space, one can specify the embedding as the degenerate space consisting of just the embedded regions. Restoring the degenerate space to a linear space is one-to-many, but each restoration is a valid embedding of the same polytope. Now note that regular polytopes contain parallel sides. Irregular and regular polytopes are both degenerate spaces; the only distinction is whether there is a partial restoration to a space missing outside regions.

## Thursday, March 23, 2017

### Minimum Equivalent

To find equivalence classes of spaces, and of polytopes, it is impractical to try all permutations of a representation to find whether one can be made equal to another. Assuming permutations of a representation are comparable as greater or lesser, an approach is to find the permutation that minimizes the representation the most. Then all equivalent representations minimize to the same member of the equivalence class.

To find the minimizing permutation, choose transpositions such that the first transposition determines which identifier will be the smallest. Of course, multiple transpositions could cause the smallest identifier to be first in the representation, so some transposition sequences get weeded out only after subsequent transpositions fail to compete with their peers. Weeding out a transposition sequence prefix once it fails to put the smallest identifiers first in the representation is justified, because no subsequent transposition affects the position of the lesser identifiers. This algorithm is not O(n), but it is potentially, and I suspect in practice, much better than the O(n!) of the naive approach of trying every permutation.

Since I needed to use the same algorithm for both spaces and polytopes, I created a Haskell class of permutation types. Each permutation type must implement refine and compare functions. A refine function returns a list of permutations, each with one more transposition than the given permutation. A compare function takes two partial permutations and compares their applications to a representation. To accomplish the compare function, each type in the class is actually a wrapper around an original representation and a partial permutation of that representation.
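The class and the pruning search might be sketched as follows. The names, and the toy IntPerm instance that merely finds the lexicographically least ordering of a list, are illustrative assumptions; the real instances wrap a space or polytope representation instead:

```haskell
import Data.List (delete, minimumBy)

class Permute p where
  refine :: p -> [p]                 -- all extensions by one more transposition
  comparePerm :: p -> p -> Ordering  -- compare applications to the representation

-- keep only the best partial permutations at each refinement level;
-- a complete permutation refines to the empty list
minimize :: Permute p => p -> p
minimize p = go [p]
  where
    go front =
      let next = concatMap refine front
      in if null next
           then minimumBy comparePerm front
           else go (prune next)
    prune ps =
      let best = minimumBy comparePerm ps
      in filter (\q -> comparePerm q best == EQ) ps

-- toy instance: each "transposition" chooses which element is placed next,
-- so minimizing yields the smallest prefix at every step
data IntPerm = IntPerm { placed :: [Int], remaining :: [Int] }

instance Permute IntPerm where
  refine (IntPerm done rest) =
    [ IntPerm (done ++ [x]) (delete x rest) | x <- rest ]
  comparePerm a b = compare (placed a) (placed b)
```

For IntPerm, minimize (IntPerm [] xs) places the elements in ascending order; the pruning step is where losing transposition sequence prefixes get weeded out.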

## Friday, March 17, 2017

### Simplex Overlap Equivalence

What is a polytope? For one thing, it has flat sides. For another, two polytopes can be equivalent without being the same. Anecdotally, all right triangles are in some sense equivalent. I would argue that the various kinds of simplex are limits of sequences of irregular simplices, and all irregular simplices are equivalent up to dimension.

I can define a polytope as a collection of maximal super-region spaces, together with their intersection super-region spaces. The intersection spaces are necessary because otherwise the relations between the maximals would not be captured. I can convert any finite union of intersections of real vector halfspaces to a polytope, and any sufficiently small tweak of the halfspaces produces the same polytope. Furthermore, my personal criterion for polytope equivalence is satisfied: simplex overlaps are equivalent iff they are equivalent as polytopes. In other words, a simplex overlap is two maximal super-regions, related by their one intersection.

A maximal super-region space is the space of a super-region not contained by any other super-region covered by the polytope's regions embedded in the space. A super-region space is a space with all boundaries attached to an inside region. Much as section spaces uniquely identify extensions of a space, super-region spaces uniquely identify super-regions of a space. To find the section space, take the regions attached on one side of the extension boundary. To find the super-region space, take the boundaries attached to the super-region. See below for a proof that there is at most one inside region attached to all boundaries.

Thus, a polytope is a collection of super-region spaces together with the sidednesses of each super-region wrt its attached boundaries. A super-region space is specified as a map from each attached boundary to the super-region space of that boundary's facet. For example, a three dimensional super-region space has a face per attached boundary, each face has a segment per significant boundary, and each segment has two significant vertex boundaries. A facet is identified by the set of boundaries in the boundary map path that leads to it. Thus, the traditional definition of a polytope as a graph of facets suffices for convex polytopes, and for concave polytopes, the definition above suffices.
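A sketch of this map-based specification in Haskell (the type names, the `Int` boundary encoding, and the `Bool` sidedness are assumptions for illustration):

```haskell
import qualified Data.Map as Map

type Boundary = Int
type Side = Bool  -- which side of an attached boundary the super-region is on

-- a super-region space maps each attached boundary to the super-region
-- space of that boundary's facet; a vertex boundary maps to the empty space
newtype SuperSpace = SuperSpace (Map.Map Boundary SuperSpace)
  deriving (Eq, Show)

-- a piece of a polytope: a super-region space plus the sidedness of the
-- super-region wrt each of its attached boundaries
data SuperRegion = SuperRegion
  { srSpace :: SuperSpace
  , srSides :: Map.Map Boundary Side
  } deriving Show

-- a facet is identified by the boundary path in the map that leads to it
facet :: SuperSpace -> [Boundary] -> Maybe SuperSpace
facet s [] = Just s
facet (SuperSpace m) (b : bs) = Map.lookup b m >>= \s' -> facet s' bs
```

For a two dimensional example, a triangle with boundaries 0, 1, 2 has each boundary mapping to a segment whose two vertex boundaries map to the empty space, and facet reaches a vertex via a two-boundary path.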

#### Super Region Theorem

There is at most one inside region attached to all boundaries. Suppose two points, p and q, are in regions attached to all boundaries. Toss out boundaries without putting either p or q in an outside region, and without putting p and q in the same region. In the end, removing any remaining boundary would make one or both of p and q outside, or would make p and q the same. If there were a boundary, b, that separated only p from an outside region, then b would not be attached to q; so all boundaries except the mutual boundary, c, separate both p and q from outside regions. Consider two outside regions, r and s, opposite p and q wrt such a boundary b. The regions r and s must be opposite each other wrt c, because r and s become the same after removing c. Thus every outside region is attached to c, so there are 6 regions total. This is not possible in any dimension, so there must be at most one region attached to all boundaries.
