Graphical Supersymmetry Algebras

Thanks to Dr. James Hughes, Matthew Lynn, Chris Walker, Chas Leichner, Alex Ray, Alex Zhu, Dr. Cornelia Van Cott, Paul Sebexen, Colin Aitken, Chuck Moulton, Sebastien Zany, and Nick for joining me in playing with this problem.

What is an adinkra?

Adinkras are colored connected simple graphs that are bipartite and \(n\)-regular.

Adinkras are a graphical representation of supersymmetry algebras.

The prefix super- comes from the theory of supersymmetry in theoretical physics. Superalgebras and their representations, supermodules, provide an algebraic framework for formulating supersymmetry.

A superalgebra is a \(\mathbb{Z}_2\)-graded algebra. In other words, it is “an algebra over a commutative ring or field with decomposition into ‘even’ or ‘odd’ pieces and a multiplication operator that respects the grading.”

What is an odd adinkra?

An adinkra is odd if, for every 4-cycle in the adinkra, the sum of the edge labels around the cycle is odd.
To create an odd adinkra, we assign 0 or 1 to every edge of an \(n\)-dimensional cube such that, for each face of the cube, the sum of the edge labels of the face is odd.

How many unique n-dimensional odd adinkras exist?

Before we attack this, let’s elaborate on the problem:

First some notation must be fixed. By \(E(G)\), we mean the edges of the graph \(G\), and by \(Q_n\), we mean the graph corresponding to the \(n\)-dimensional cube. The formal statement is then as follows.

How many edge 2-colourings \(\phi : E(Q_n) \to \{0,1\}\) exist such that, for each 4-cycle \(C\) of \(Q_n\), the sum of the edge colours is odd: \(\sum_{e \in E(C)} \phi(e) \equiv 1 \pmod{2}\)?

Note that we bastardize the term “edge \(n\)-colouring”: here it simply means an assignment of some \(x \in \{0, 1, \dots, n-1\}\) to every edge.
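
To make the question concrete, here is a minimal brute-force sketch (my own, not part of the post's code; the helper names are illustrative) that counts such colourings for small \(n\). It reports 8 for \(n = 2\) and 128 for \(n = 3\), matching the closed form \(2^{2^n-1}\) derived below.

from itertools import combinations, product
# edges of Q_n as frozensets of endpoints; faces as 4-tuples of edge indices
def edges_and_faces(n):
    verts = range(2 ** n)
    edges = sorted({frozenset((v, v ^ (1 << i))) for v in verts for i in range(n)},
                   key=lambda e: tuple(sorted(e)))
    index = {e: k for k, e in enumerate(edges)}
    faces = []
    for v in verts:
        for i, j in combinations(range(n), 2):
            if not (v >> i) & 1 and not (v >> j) & 1:   # v is the corner of the face
                a, b, c = v ^ (1 << i), v ^ (1 << j), v ^ (1 << i) ^ (1 << j)
                faces.append(tuple(index[frozenset(e)]
                                   for e in ({v, a}, {v, b}, {a, c}, {b, c})))
    return edges, faces
# try every 0/1 assignment and keep those in which every face sums to an odd number
def count_odd_colourings(n):
    edges, faces = edges_and_faces(n)
    return sum(all(sum(phi[k] for k in f) % 2 == 1 for f in faces)
               for phi in product((0, 1), repeat=len(edges)))
for n in (2, 3):
    print(n, count_odd_colourings(n))   # 8 and 128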

Breaking Down The Problem: The Overture

Let \(e_n\) denote the number of edge colourings of \(Q_n\) for which the sum of the edge colours is even, and let \(o_n\) denote the number of edge colourings for which the sum is odd. As \(Q_0\) contains a single vertex and no edges, the only colouring is the empty one, and as a convention we set \(e_0 = o_0 = 1\).

Now \(Q_1\) contains two vertices with a single edge between them, so \(e_1 = o_1 = 1\). The slightly more interesting graph \(Q_2\) corresponds to a square, whose edges can be 2-coloured in \(2^4\) ways; half of these give an even sum and the other half an odd sum, so \(e_2 = o_2 = 8\). The graph \(Q_3\) can be 2-coloured in \(2^{12}\) ways, half of which must be even and the other half odd, so \(e_3 = o_3 = 2^{11}\).

Conjecture: For all \(n\), \(e_n = o_n = 2^{|E(Q_n)| - 1}\).
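
A quick brute-force check of the conjecture for small \(n\) (a sketch of my own; note that flipping the colour of any single fixed edge swaps even-sum and odd-sum colourings, which already shows \(e_n = o_n\) for \(n \ge 1\)):

from itertools import product
for n in (1, 2, 3):
    m = n * 2 ** (n - 1)                              # |E(Q_n)|
    odd = sum(sum(c) % 2 for c in product((0, 1), repeat=m))
    print(n, odd, 2 ** (m - 1))                       # odd-sum colourings vs 2^(|E|-1)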

When constructing \(Q_{n+1}\) from \(Q_n\), we duplicate the graph \(Q_n\) and connect every vertex of the duplicate to the corresponding vertex of the original.

Combinatorial Approach

For a 2-cube: we have 2 different states [the sides sum to 1 or 3], and 4 unique rotational variants of each state: \(2 \cdot 4 = 8\).

Can we write this in a closed form? The number of all possible states of the cube is \(2^8\).

We have 8 different choosing points, one for each of the 8 edges.
For each of these choosing points, we have 2 different options: pick 0 or 1.


What about for higher dimensions?


For the front face of the cube (the front copy of the \((n-1)\)-cube), we have \(2^{2^{n-1}-1}\) valid colourings.

For the back face of the cube, we also have \(2^{2^{n-1}-1}\).

The state of every other connecting edge is completely determined by the first connecting edge we choose [following an intuition we will confirm in the following section]. Therefore, instead of contributing a factor of 2 for each connecting edge, the connecting edges contribute only a single factor of 2.

This gives us \(2^{2^{n-1}-1} \cdot 2^{2^{n-1}-1} \cdot 2^1 = 2^{2^n-1}\), since \((2^{n-1}-1) + (2^{n-1}-1) + 1 = 2^n - 1\). \(_\blacksquare\)

Visually Formalizing our Intuition

Until we prove our hunch is correct, there is a hole in the induction step, where one would need to show how many ways exist to glue two \((n-1)\)-dimensional hypercubes together.

Intuitively, as soon as you fill in 1 connecting edge (0 or 1), that fixes the rest of them \(\rightarrow\) meaning that there are only 2 valid ways to connect any 2 \(n\)-dimensional cubes into an \((n+1)\)-dimensional cube.

This can be visually formalized and inductively proven:

We can add (mod 2) the corresponding edges of the two \((n-1)\)-cubes to get a new labelled \((n-1)\)-cube; the old problem of colouring the edges between the two cubes then becomes the problem of colouring vertices so that the sum of the colours of two adjacent vertices has the parity prescribed by the label on the edge between them.
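
To see the claim concretely, here is a small sketch of my own for \(n = 2\): take any pair of valid (odd) colourings for the front and back squares of \(Q_3\), try all \(2^4\) assignments of the four connecting edges, and count how many make every face of \(Q_3\) odd. The count comes out to exactly 2 for every pair.

from itertools import combinations, product
# the six 4-cycles of Q_3, each as a list of edges (frozensets of endpoints)
def faces_q3():
    faces = []
    for v in range(8):
        for i, j in combinations(range(3), 2):
            if not (v >> i) & 1 and not (v >> j) & 1:
                a, b, c = v ^ (1 << i), v ^ (1 << j), v ^ (1 << i) ^ (1 << j)
                faces.append([frozenset(e) for e in ({v, a}, {v, b}, {a, c}, {b, c})])
    return faces
# all 8 odd colourings of a single square, as dicts edge -> colour
def valid_squares():
    sq_edges = [frozenset(e) for e in ({0, 1}, {0, 2}, {1, 3}, {2, 3})]
    return [dict(zip(sq_edges, c)) for c in product((0, 1), repeat=4) if sum(c) % 2 == 1]
faces = faces_q3()
connecting = [frozenset((v, v + 4)) for v in range(4)]       # edges between the two copies
counts = set()
for front in valid_squares():                                # front square lives on vertices 0..3
    for back in valid_squares():                             # back square is shifted onto 4..7
        fixed = dict(front)
        fixed.update({frozenset(w + 4 for w in e): c for e, c in back.items()})
        n_ok = 0
        for assignment in product((0, 1), repeat=4):
            colouring = dict(fixed)
            colouring.update(dict(zip(connecting, assignment)))
            if all(sum(colouring[e] for e in f) % 2 == 1 for f in faces):
                n_ok += 1
        counts.add(n_ok)
print(counts)   # {2}: every pair of valid squares extends in exactly 2 ways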


Computational Attacks


All code used in this post is on my github.

generate_hypercube.py

The set-theoretic approach is just a cool way to think about graph generation.

# Hypercube Problem
from itertools import chain, combinations
# Create generator set for hypercube of dimension x
def S(x):
    return set(range(0, x))
# From http://stackoverflow.com/questions/18035595/powersets-in-python-using-itertools
def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
# Generate a list of edges given a list of vertices
def edges(verts):
    return set(frozenset([x, y]) for x in verts for y in verts if len(x ^ y) == 1)
# Run this to generate hypercube of specified number of dimensions
def Qgen(dimensions):
    genSet = S(dimensions)
    print(genSet)
    Verts = list(powerset(genSet))
    print(Verts)
    sVerts = set(frozenset(x) for x in Verts)
    print(sVerts)
    print(len(sVerts))
    Edges = edges(sVerts)
    print(Edges)
    print(len(Edges))
    return
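
For example (assuming the definitions above), Qgen(3) prints the generator set \(\{0, 1, 2\}\), the \(2^3 = 8\) vertices as subsets, the 12 edges of the 3-cube, and the two counts.
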
cube.py

A shiny Graphviz script to generate \(n\)-dimensional Hamming cubes; it falls apart around \(n = 8\).

import colorsys
import itertools
import sys
n = int(sys.argv[1])
bits = list(itertools.product([0, 1], repeat=n))
HSV_tuples = [(x*1.0/n, 0.5, 0.5) for x in range(n+1)]
tuples = map(lambda x: '#%02x%02x%02x' % tuple(map(lambda x: int(x*255), colorsys.hsv_to_rgb(*x))), HSV_tuples)
def diff(a, b):
    difference = 0
    for a0, b0 in zip(a, b):
        difference += a0 != b0
    return difference
def reprtag(t):
    return "".join(str(b) for b in t)
print 'graph cube {'
print '  ranksep=', n/2.0, ';'
print '  fontsize=8;'
seen = set()
for ((a, b), c) in zip(itertools.product(bits, bits), itertools.cycle(tuples)):
    if diff(a, b) == 1 and (b, a) not in seen:
        seen.add((a, b))
        print '  "', reprtag(a), '"',  '--', '"', reprtag(b), '" [color="',  c, '"]'
print '}'

Dependency:

sudo apt-get install graphviz

Make them all in one go:

for i in $(seq 1 8); do python cube.py $i > cube${i}.dot; done
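
Each cube${i}.dot can then be rendered with Graphviz, e.g. with "dot -Tpng cube3.dot -o cube3.png".
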
altcube.py

This program generates text files containing all unique n-cubes satisfying the properties described. This works in an arbitrary number of dimensions and takes a file of the previous dimension as input.

import string
dimension_start = 3
dimension_end = 4
# loop through the dimensions you care about
# (stored in files, so you don't have to start from scratch each time)
for dimension in range( dimension_start, dimension_end + 1 ):
	# initialize lists & dictionaries as empty
	vertices = []
	edges = []
	faces = {}
	# create a list of vertices in string binary form (e.g., '1001' = 9 ).
	for vertex_int in range( 2**dimension ):
		vertices.append( bin( vertex_int )[2:].zfill( dimension ) )
	# create a list of valid edges in string binary form (e.g., '10011011' = 9,11 )
	# and a lookup table of faces keyed by edge.
	for vertex in vertices:
		for i in range( len( vertex ) ):
			if vertex[i:i+1] == '0':
				edges.append( vertex + vertex[:i] + '1' + vertex[i+1:] )
				# create a list of valid faces as a lookup table of edges.
				for j in range( i + 1, len( vertex ) ):
					# the four edges of the face obtained by flipping bits i and j of vertex
					edge_a = vertex + vertex[:i] + '1' + vertex[i+1:]
					edge_b = vertex + vertex[:j] + '1' + vertex[j+1:]
					edge_c = vertex[:i] + '1' + vertex[i+1:] + vertex[:i] + '1' + vertex[i+1:j] + '1' + vertex[j+1:]
					edge_d = vertex[:j] + '1' + vertex[j+1:] + vertex[:i] + '1' + vertex[i+1:j] + '1' + vertex[j+1:]
					face = [ edge_a, edge_b, edge_c, edge_d ]
					# each edge keeps the first face it was seen in
					for face_edge in face:
						if not ( face_edge in faces ):
							faces[ face_edge ] = face
	# read in the (n-1)-dimensional valid cubes
	cube_small_file = open( 'cube-' + str( dimension - 1 ) + '.txt', 'r' )
	cubes_small = []
	for line in cube_small_file:
		cubes_small.append( line.strip( '\n' ) )
	cube_small_file.close()
	# open the n-dimensional cube file for writing
	cube_big_file = open( 'cube-' + str( dimension ) + '.txt', 'w' )
	# open an error file for writing
	cube_error_file = open( 'cube_error.txt', 'w' )
	# combine every cube with every other cube
	for cube1_string in cubes_small:
		for cube2_string in cubes_small:
			# read in n-1 dimension cubes
			cube1_list = [ '0' + cube1[:dimension-1] + '0' + cube1[dimension-1:] for cube1 in cube1_string.split( ',' ) ]
			cube2_list = [ '1' + cube2[:dimension-1] + '1' + cube2[dimension-1:] for cube2 in cube2_string.split( ',' ) ]
			# combine cubes the two possible ways
			for binary_value in range( 0, 2 ):
				new_edges = []
				new_edges_track = []
				new_edges_face_order = []
				cube_big_edges = {}
				# add the edges of two n-1 dimension cubes to the edge list for the n cube
				for cube1_edge in cube1_list:
					cube_big_edges[ cube1_edge[:-1] ] = cube1_edge[-1:]
				for cube2_edge in cube2_list:
					cube_big_edges[ cube2_edge[:-1] ] = cube2_edge[-1:]
				# create a list of the new edges
				for edge in edges:
					if not ( edge in cube_big_edges ):
						new_edges.append( edge )
				# initialize one new edge (to 0 or 1 depending on the for loop)
				for edge in new_edges:
					cube_big_edges[ edge ] = str( binary_value )
					break
				# order the list of new edges so that only one edge will be unknown at any time
				new_edges_track = list( new_edges )
				for edge in new_edges:
					face = faces[ edge ]
					for face_edge in face:
						if face_edge in new_edges_track:
							new_edges_track.remove( face_edge )
							new_edges_face_order.append( face_edge )
				# go through the edges in that order, filling in new edges each time
				for edge in new_edges_face_order:
					odd_even_total = 0
					face = faces[ edge ]
					for face_edge in face:
						if face_edge in cube_big_edges:
							odd_even_total = odd_even_total + int( cube_big_edges[ face_edge ] )
					for face_edge in face:
						if not ( face_edge in cube_big_edges ):
							cube_big_edges[ face_edge ] = str( ( odd_even_total + 1 ) % 2 )
				# now that all edge values are filled in, doublecheck that every face is odd (quadruple-check actually -- inefficiently -- because I lookup by edges)
				for edge in edges:
					odd_even_total = 0
					face = faces[ edge ]
					for face_edge in face:
						if face_edge in cube_big_edges:
							odd_even_total = odd_even_total + int( cube_big_edges[ face_edge ] )
					if odd_even_total % 2 == 0:
						cube_error_file.write( 'EVEN FACE ERROR: ' )
						for face_edge in face:
							cube_error_file.write( face_edge + ':' + cube_big_edges[ face_edge ] + ',' )
						cube_error_file.write( '\n' )
						cube_error_file.write( 'CUBE1: ' + ','.join( cube1_list ) + '\n' )
						cube_error_file.write( 'CUBE2: ' + ','.join( cube2_list ) + '\n' )
				# output the n-dimensional cube
				cube_big_list = []
				for edge in edges:
					cube_big_list.append( edge + cube_big_edges[ edge ] )
				cube_big_file.write( ','.join( cube_big_list ) + '\n' )
	cube_big_file.close()
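
As written, the loop expects a seed file from the previous dimension (e.g., cube-2.txt when dimension_start = 3): one cube per line, encoded as comma-separated edge strings with the 0/1 colour appended, which is exactly the format the script writes back out for dimension n.
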
square.pl

Coded in desperation to test the conjecture for 3 & 4 dimensions. Conditional programming is cool! helper.scala autogenerated the legal 4-cycles.

Dependency: Download the SWI-prolog package

sudo apt-get install swi-prolog

This can be run in the prolog console:

swipl -s square.pl

with the following commands.

findall([A,B,C,D,E,F,G,H,I,J,K,L], cube(A,B,C,D,E,F,G,H,I,J,K,L), List), length(List, Len).
findall([E1, E2, E3, E4, E5, E6, E7, E8, E9, E10, E11, E12, E13, E14, E15, E16, E17, E18, E19, E20, E21, E22, E23, E24, E25, E26, E27, E28, E29, E30, E31, E32], dim4(E1, E2, E3, E4, E5, E6, E7, E8, E9, E10, E11, E12, E13, E14, E15, E16, E17, E18, E19, E20, E21, E22, E23, E24, E25, E26, E27, E28, E29, E30, E31, E32), List), length(List, Zen).
findall([E0000e, E0001e, E000e0, E000e1, E0010e, E0011e, E001e0, E001e1, E00e00, E00e01, E00e10, E00e11, E0100e, E0101e, E010e0, E010e1, E0110e, E0111e, E011e0, E011e1, E01e00, E01e01, E01e10, E01e11, E0e000, E0e001, E0e010, E0e011, E0e100, E0e101, E0e110, E0e111, E1000e, E1001e, E100e0, E100e1, E1010e, E1011e, E101e0, E101e1, E10e00, E10e01, E10e10, E10e11, E1100e, E1101e, E110e0, E110e1, E1110e, E1111e, E111e0, E111e1, E11e00, E11e01, E11e10, E11e11, E1e000, E1e001, E1e010, E1e011, E1e100, E1e101, E1e110, E1e111, Ee0000, Ee0001, Ee0010, Ee0011, Ee0100, Ee0101, Ee0110, Ee0111, Ee1000, Ee1001, Ee1010, Ee1011, Ee1100, Ee1101, Ee1110, Ee1111], dim5(E0000e, E0001e, E000e0, E000e1, E0010e, E0011e, E001e0, E001e1, E00e00, E00e01, E00e10, E00e11, E0100e, E0101e, E010e0, E010e1, E0110e, E0111e, E011e0, E011e1, E01e00, E01e01, E01e10, E01e11, E0e000, E0e001, E0e010, E0e011, E0e100, E0e101, E0e110, E0e111, E1000e, E1001e, E100e0, E100e1, E1010e, E1011e, E101e0, E101e1, E10e00, E10e01, E10e10, E10e11, E1100e, E1101e, E110e0, E110e1, E1110e, E1111e, E111e0, E111e1, E11e00, E11e01, E11e10, E11e11, E1e000, E1e001, E1e010, E1e011, E1e100, E1e101, E1e110, E1e111, Ee0000, Ee0001, Ee0010, Ee0011, Ee0100, Ee0101, Ee0110, Ee0111, Ee1000, Ee1001, Ee1010, Ee1011, Ee1100, Ee1101, Ee1110, Ee1111), List), length(List, Zen).
%Edges can be either 0 or 1
color(0).
color(1).
even(N) :-
  0 is N mod 2.
odd(N) :-
  1 is N mod 2.
even(A, B) :-
  0 is A mod 2,
  0 is B mod 2.
evenSum(A, B, C, D) :- 
  Sum = A + B + C + D,
  even(Sum).
oddSum(A, B, C, D) :- 
  Sum = A + B + C + D,
  odd(Sum).
%Check that the sum of the edges of a 4-cycle is odd
square(E1, E2, E3, E4) :- 
    color(E1), color(E2), color(E3), color(E4),  %separates the statements that must be satisfied
    oddSum(E1, E2, E3, E4).
%coordinates on an edge of the cube form a 3-tuple, (x,y,z) <=> Exyz
%Ee00: x = e, y = 0, z = 0
%where e is a placeholder for the slot that is changing; the other 2 slots are fixed
%Prolog is a declarative language, thus we will generate the legal 4-cycles in our friend, Haskell.
cube(Ee00, E0e0, E00e,
     Ee01, E0e1, E01e,
	 Ee10, E1e0, E10e,
	 Ee11, E1e1, E11e) :- 
	 square(Ee00, E0e0, Ee10, E1e0),
	 square(E00e, E0e0, E01e, E0e1),
	 square(E10e, E11e, E1e0, E1e1),
	 square(Ee00, E00e, E10e, Ee01),
	 square(E01e, Ee10, E11e, Ee11),
	 square(Ee01, Ee11, E0e1, E1e1).
%Count the number of cubes:
%findall([12 edge variables], cube(12 edge variables), List)
%  first argument: name the variables (the template)
%  second argument: name something that does things (the goal)
%  third argument: name where the elements go (the result list)
%length(List, Len) gives the length of that list
dim4(Ee110, E101e, E0e11, E00e1, E11e0, E11e1, E10e1, Ee010, E0e01, Ee100, Ee111, E00e0, E111e, E110e, E0e00, E011e, Ee101, Ee011, E001e, Ee000, E010e, E01e1, E100e, E1e00, E1e11, E000e, Ee001, E0e10, E10e0, E1e01, E1e10, E01e0) :-
	square(E000e, E001e, E00e0, E00e1),
	square(E0e00, E010e, E0e01, E000e),
	square(E01e0, E0e00, E0e10, E00e0),
	square(E100e, Ee001, Ee000, E000e),
	square(Ee000, E00e0, E10e0, Ee010),
	square(E0e00, E1e00, Ee000, Ee100),
	square(E01e0, E010e, E011e, E01e1),
	square(E001e, E0e10, E0e11, E011e),
	square(E0e01, E00e1, E0e11, E01e1),
	square(E101e, Ee011, E001e, Ee010),
	square(Ee001, Ee011, E10e1, E00e1),
	square(E1e01, E0e01, Ee001, Ee101),
	square(E100e, E101e, E10e1, E10e0),
	square(E100e, E1e01, E1e00, E110e),
	square(E1e00, E10e0, E1e10, E11e0),
	square(E010e, Ee101, E110e, Ee100),
	square(E01e0, Ee110, Ee100, E11e0),
	square(Ee110, E0e10, E1e10, Ee010),
	square(E110e, E11e1, E11e0, E111e),
	square(E101e, E1e11, E1e10, E111e),
	square(E1e01, E1e11, E10e1, E11e1),
	square(Ee110, Ee111, E111e, E011e),
	square(Ee101, Ee111, E11e1, E01e1),
	square(Ee011, E1e11, Ee111, E0e11).

The rest of this code is large lists of edge variables, extending this to higher dimensions: square.pl

generate_hypercube.py:: Graph / Set Theoretic Exploration
The \((U,V,E)\) definition of the graph, with vertices from the power set and edges enumerated by Hamming distance / symmetric difference, is what we’ll focus on.

Here is the summary (the delta operator in \(E\) means symmetric set difference): take the generator set \(S = \{0, 1, \dots, n-1\}\), let \(V = \mathcal{P}(S)\) be its power set, and let \(E = \{\{u, v\} : u, v \in V,\ |u \,\Delta\, v| = 1\}\).

My earlier attempt at stating the method included written-out examples of \(Q_2\) and \(Q_3\).


altcube.py:: String typed n-dimensional cube (legal 4-cycles)

If you represent the vertices as strings, for a hypercube you have:

0000 = 0
0001 = 1
0010 = 2
0011 = 3
0100 = 4
0101 = 5
0110 = 6
0111 = 7
1000 = 8
1001 = 9
1010 = 10
1011 = 11
1100 = 12
1101 = 13
1110 = 14
1111 = 15
The valid edges change 1 bit. So a face changes 2 bits.

One face would have vertices:
1001 = 9
1011 = 11
1101 = 13
1111 = 15

The edges are the ones that change 1 bit:
10011011 = 9,11
10011101 = 9,13
10111111 = 11,15
11011111 = 13,15

You can represent an edge with its odd/even by adding a bit:

110111111 = 13,15 is 1
110111110 = 13,15 is 0

So I can represent an n-cube as a string of all edge values.

To create the next dimension up, I copy each valid cube twice, prefixing 0 to all vertices of one copy and 1 to all vertices of the other.

So that one edge could be:

01101011111 = 13,15 is 1
and
11101111111 = 29,31 is 1

The new edges are all 1 bit off as before.
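
A tiny sketch of my own (illustrative function names) of this string encoding:

def edge_key(v1, v2):
    # concatenate two vertex bitstrings that differ in exactly one bit
    assert sum(a != b for a, b in zip(v1, v2)) == 1
    return v1 + v2
def labelled_edge(v1, v2, colour):
    # append the 0/1 colour bit to the edge key
    return edge_key(v1, v2) + str(colour)
print(edge_key('1101', '1111'))          # '11011111'  -> the edge 13,15
print(labelled_edge('1101', '1111', 1))  # '110111111' -> the edge 13,15 coloured 1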

square.pl:: Method of Computationally Generating Legal 4-cycles

Start from the vertex labels: 00, 01, 10, 11.
Insert e as a placeholder for the slot that varies, to name the edges (e.g., e00, 0e0, 00e): original[:pos] + ins + original[pos:].
Pick all possible combinations of 4 edge labels (labels choose 4).
Find the legal 4-cycles: check that n slots are the same across every element of the combination.
In the case of 3 dimensions, n = 1.
Example of a legal 4 cycle: (Ee00, E0e0, Ee10, E1e0)

We know it is legal because a columnwise comparison shows which slots agree across all four labels:

e00
0e0
e10
1e0
001 = 1 slot in common
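
A minimal sketch of my own of that columnwise filter (as in the text, it only checks the slots-in-common condition on a candidate set of four edge labels):

def slots_in_common(edge_labels):
    # count columns where all labels agree and the shared character is not 'e'
    return sum(1 for column in zip(*edge_labels)
               if len(set(column)) == 1 and column[0] != 'e')
def looks_legal(edge_labels, dim):
    # a face of the dim-cube fixes dim - 2 slots (so 1 slot in 3 dimensions)
    return slots_in_common(edge_labels) == dim - 2
print(looks_legal(['e00', '0e0', 'e10', '1e0'], 3))   # True: one slot in common
print(looks_legal(['e00', '0e0', 'e10', '1e1'], 3))   # False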

Filtered Clifford Supermodules

What is Geometric Algebra?

The geometric algebra \(\mathbb{G}^n\) is an extension of the inner product space \(\mathbb{R}^n\). It is an associative algebra whose members are called multivectors; each vector in \(\mathbb{R}^n\) is also a multivector in \(\mathbb{G}^n\). It is a vector space that satisfies the following properties for all scalars \(a\) and \(A, B, C \in \mathbb{G}^n\):

0. \(A(B+C) = AB + AC\), \((B + C)A = BA + CA\)

1. \((aA)B = A(aB) = a(AB)\)

2. \((AB)C = A(BC)\)

3. \(1A = A1 = A\)

4. The geometric product of \(\mathbb{G}^n\) is linked to the algebraic structure of \(\mathbb{R}^n\) by \(uu = u \cdot u = |u|^2 \forall u \in \mathbb{R}^n\)

5. Every orthonormal basis for \(\mathbb{R}^n\) determines a canonical basis for the vector space \(\mathbb{G}^n\).

Geometric algebra represents geometric operations on these objects with algebraic operations in \(\mathbb{G}^n\)

Coordinates are not used in these representations

What is a Clifford Algebra?

Clifford algebras are superalgebras, with odd and even parts generated by the odd- and even-degree monomials, respectively.

If you like, you can think of real 3-D space as your vector space \(V\) with its regular inner product, and now imagine it sitting inside an algebra \(Cl(V)\). (I am omitting a lot of details in the interest of simplicity.) Since \(V\) is three-dimensional, you expect \(Cl(V)\) to have at least that many dimensions, but it turns out it will be \(2^3\)-dimensional (and \(2^n\)-dimensional for an \(n\)-dimensional \(V\)).

Clearly there are many more elements in \(Cl(V)\) than just those in \(V\). The question is: do they have a useful interpretation? The answer is “yes”, because of the way things are set up. It turns out that subspaces of V can be represented as products of elements of \(V\) in \(Cl(V)\). Because multiplication is defined everywhere, you can now “multiply things in \(V\) with subspaces of \(V\)”.

How do we go between superalgebras and graphs?

We could call a ranking of a bipartite graph \(A\) a map \(h: V(A) \rightarrow \mathbb{Z}_2\) that gives \(A\) the additional structure of a ranked poset, with \(h\) as the rank function.

If you have said ranked poset and rank function \(h\), then we can identify \(A\) as the Hasse diagram of the ranked poset.

Proof from Yan Zhang’s thesis


Instead of choosing 4-element sets as outlined in the Graph / Set theoretic approach, Yan defined an \(|E|\)-element binary string (sort of like a bit mask) for each cycle, with a 1 in each position where the corresponding edge belongs to that cycle (it’s easy to think about or draw in 2 / 3 / 4 dimensions). These become the rows of a matrix. You can define the colouring of edges as an \(|E|\)-element 0/1 column vector, and it is then possible to make the leap to say that the product of the matrix and the vector (mod 2) is a vector of all ones exactly in the case of a successful odd dashing. Playing with that should make it clear that the rank of that matrix defines the dimensionality of the cycle space, which is equivalent to a number of similar linear algebra formulations.
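
To make that concrete, here is a small sketch of my own (plain Python, illustrative helper names) of the same linear-algebra view: build the cycle/edge 0-1 matrix for the square faces of \(Q_n\), take its rank over \(\mathbb{Z}_2\), and, since the system \(Mx = \mathbf{1}\) is consistent (odd dashings exist), count the solutions as \(2^{|E| - \mathrm{rank}(M)}\).

from itertools import combinations
# rows of the cycle-edge matrix of Q_n, each row an int bitmask over the edges
def face_rows(n):
    verts = range(2 ** n)
    edges = sorted({frozenset((v, v ^ (1 << i))) for v in verts for i in range(n)},
                   key=lambda e: tuple(sorted(e)))
    index = {e: k for k, e in enumerate(edges)}
    rows = []
    for v in verts:
        for i, j in combinations(range(n), 2):
            if not (v >> i) & 1 and not (v >> j) & 1:        # v is the face's corner
                a, b, c = v ^ (1 << i), v ^ (1 << j), v ^ (1 << i) ^ (1 << j)
                row = 0
                for e in ({v, a}, {v, b}, {a, c}, {b, c}):   # the 4 edges of the face
                    row |= 1 << index[frozenset(e)]
                rows.append(row)
    return rows, len(edges)
# Gaussian elimination over GF(2), using leading-bit pivots on int bitmasks
def gf2_rank(rows):
    pivots = {}
    for v in rows:
        while v:
            p = v.bit_length() - 1
            if p not in pivots:
                pivots[p] = v
                break
            v ^= pivots[p]
    return len(pivots)
for n in (2, 3, 4):
    rows, num_edges = face_rows(n)
    print(n, 2 ** (num_edges - gf2_rank(rows)))   # 8, 128, 32768 = 2^(2^n - 1)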

Additional Insights


Recursive Generation of Faces and Edges

Credit: Chris Walker

\(v_n = 2^n\)
\(e_n = 2e_{n-1}+v_{n-1}\)
\(f_n = 2f_{n-1}+e_{n-1}\)
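
A quick sanity check of these recurrences (a sketch of my own, seeded with \(Q_1\): 2 vertices, 1 edge, 0 faces):

v, e, f = 2, 1, 0                          # Q_1
for n in range(2, 7):
    v, e, f = 2 * v, 2 * e + v, 2 * f + e  # the recurrences above, applied to Q_{n-1}
    print(n, v, e, f)                      # n=3 gives 8, 12, 6; n=4 gives 16, 32, 24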

A Promising Method & Proven Bijection

\(\mathbb{Z}^E_2 \rightarrow \mathbb{Z}^F_2/\mathbb{Z}^1_2\)

In the case of \(n = 3\), we have 12 edges and 6 faces

\(12 \rightarrow 6+1\)

\(2^{edges - faces}\)

(Rank Nullity Theorem)

The preimage of any point in the image of a linear map has the same size as the kernel; thus there are the same number of even and odd legal colorings, and the difference between any 2 odd cubes is always even.

Alex’s Thoughts

A null connection and its inverse give a higher order null connection, where null connection means none of the edges connecting sub-cubes are value 1:

Credit: Alex Ray

Clean enumeration such that each unique \((2^n - 1)\)-length binary string straightforwardly generates a unique solution.

Credit: Alex Ray


Conclusion

You know those problems which you know you shouldn’t be thinking about but keep bugging you so you occasionally work on them in desperation and then forcibly shove them to the backburner?

This is one of those problems. We’ve conquered it.

Manipulation of Living Cells & Dead Ones

There are many methods for manipulating living cells and pieces of dead ones. The method of choice is entirely context dependent.

Thanks to Paul for sending me this lovely diagram!

Optical tweezers are highly concentrated laser beams which are used to attract and repel microscale and nanoscale objects in order to precisely move them. This method basically uses high powered photons to punch the cell around \(\rightarrow\) manipulate its momentum.

Electrophoresis works on things that are charged. ABEL is a form of electrophoresis which requires active feedback. Electrophoresis implements the principle of electro-osmotic flow: flowing a mixed solution through porous material (gel electrophoresis) or capillaries (for capillary electrophoresis) to separate and purify the contents according to their electrophoretic mobility (a function of their molecular weight and charge). I won’t go into sequencing techniques in this post.

Note: For analyzing DNA or proteins with gel or capillary electrophoresis, you likely lysed your cell [killed it brutally] to extract the object of interest.

 Image Source: Gel Electrophoresis
Image Source: Agarose Gel Electrophoresis
Image Source: Polyacrylamide Gel Electrophoresis

PAGE is a vertical electrophoretic technique used to uniformly move relatively small objects (lower molecular weight DNA, proteins, etc.). PAGE results in sharper, less “smeared” bands than those from agarose gels (due to the smaller pore size of polyacrylamide).

Image Source: PAGE Setup

Cells have a non-uniform charge distribution. The solution holding the cells for observation often forms a counter-ion cloud which can shield and neutralize the charge of the cell. To avoid such complications, we use dielectrophoresis: an electrophoretic technique that works for all dielectric materials.

Dielectrophoresis

There are a variety of electrode geometries. As a general principle, the geometries are less complex for things on the order of cell size (easy to trap) and more complex for smaller objects.

Chemical or Microfluidic Immobilization

Acoustic Trapping with Microfluidics

Electrorotation


Other options include flowing cells through confocal microscopes, or magnetically manipulating cells using magnelles. Anything that generates a force reliably can be used as a trapping device!

Keep in mind that trapping the specimen is only half of the battle. The consequent imaging techniques are far more beautiful and complex.

What is Bra-ket Notation?

Bra-ket notation is concise and useful.

A wavefunction is represented by a ket \(|\psi\rangle\).

The complex conjugate of the wave function is written as a bra \(\langle\psi|\).

The complex conjugate of a variable is found by swapping the sign of the imaginary part of said variable’s complex number, in other words: reflecting \(z\) across the real axis. For example,
\(z = x + iy\)
\(z^* = x - iy\)

A bra on the left and a ket on the right implies integration over all space:
\(\langle\psi|\psi\rangle \equiv \int\psi^*\psi \, d\tau\)

Similarly,
\(\langle\psi|\hat{X}|\psi\rangle \equiv \int\psi^*\hat{X}\psi \, d\tau\)
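
A finite-dimensional analogue makes the notation concrete (a small sketch of my own, not part of the original tutorial): the ket is a complex vector, the bra is its conjugate, and sandwiching an operator is just a matrix product standing in for the integral.

import numpy as np
ket = np.array([1 + 1j, 2 - 1j]) / np.sqrt(7)   # |psi>, normalised so <psi|psi> = 1
bra = ket.conj()                                 # <psi|
X = np.array([[0, 1], [1, 0]])                   # a Hermitian operator
print(bra @ ket)        # <psi|psi> -> 1 (up to rounding)
print(bra @ X @ ket)    # <psi|X|psi>, the (real) expectation value of X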

My brief tutorial covered the basic usages of bra-ket notation in a quantum mechanical context; bra-ket notation is also used elsewhere.

Simple Cubic Lattice

Today, let’s have some fun playing with perspective rendering in Python! My graphics package of choice is VPython:

sudo apt-get install python-visual

Suppose we want to represent a simple cubic lattice. Using the visual package, we can create a collection of spheres at positions \((i,j,k)\) with \(i, j, k = -L, \dots, L\).

from visual import sphere
L = 5
R = 0.3
for i in range(-L, L+1):
    for j in range(-L, L+1):
        for k in range(-L, L+1):
            sphere(pos=[i, j, k], radius=R)

Resulting in this beautiful rendering:

But what if we want to change the color? Import color as well (from visual import sphere, color) and simply alter the last line:

sphere(pos=[i,j,k],radius=R).color = color.blue

Fewer atoms? Alter the value of L.

L = 1

Same number of atoms with smaller radii? Alter the value of R.

R = 0.2

Want more advanced computational physics with Python?

Coupled Oscillator Love

Disclaimer: This informal post assumes you are familiar with basic photonics and applied chaos theory.

Unfortunately, “chaos” has become a buzzword. Instead of using chaos as an umbrella term, I will use chaos to mean chaotic parameter driving (because that’s the cool stuff)! There is also such a thing as synchronized chaos.

Lasers are nonlinear oscillators – their behavior can be periodic or chaotic. You can sync two lasers operating in a chaotic state!


If we place two lasers such that their beams are parallel and separated by a small distance \(dx\), the beams themselves will not overlap. For a small enough \(dx\), their electric fields do! This direct physical coupling of the two lasers is enough to induce sync!

You can also sync a bunch of different electronic circuits with linear error feedback coupling. A freaking sweet combination: Chua’s circuit (a chaotic electronic circuit) and an optoelectronic chaotic oscillator. You can couple Chua’s circuit with a plasma discharge tube (Chua’s circuit produces the HV DC current required to power/sync with the discharge tube). Even though the systems are physically different they couple anyway! And that, my friends, is a wonderful demonstration of the beauty of math.
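
The laser setup above is out of reach of a blog post, but the same linear error feedback idea is easy to watch numerically. A minimal sketch of my own (a toy using Lorenz oscillators rather than laser rate equations):

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
dt, k = 0.001, 5.0                         # time step and coupling gain
a = (1.0, 1.0, 1.0)                        # drive oscillator
b = (-3.0, 2.0, 25.0)                      # response oscillator, different start
for _ in range(100000):                    # integrate 100 time units with forward Euler
    da, db = lorenz(a), lorenz(b)
    # linear error feedback: push every response variable toward the drive
    db = tuple(d + k * (ai - bi) for d, ai, bi in zip(db, a, b))
    a = tuple(ai + dt * d for ai, d in zip(a, da))
    b = tuple(bi + dt * d for bi, d in zip(b, db))
print(max(abs(ai - bi) for ai, bi in zip(a, b)))   # ~0: the two trajectories have synced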

Unschärferelation (The Uncertainty Relation)

Concepts of quantum mechanics get devoured by the dark fog of ignorance. Let me lift the fog of one widespread misunderstanding. Let me change your current opinion.

Suppose we have a road tunnel. How are we supposed to know whether a car is in the tunnel? Of course we can do it without looking: we determine “experimentally” whether or not a car is in the tunnel.

We build an autonomous robot car (and add an ignition bypass). We let the car drive straight through the tunnel and listen for a collision.

If we hear a collision, then we know that a car is in the tunnel. The end of the tunnel is not even in sight! But the measurement changes the state (or shape?) of the car.

That is a macroscopic example of the uncertainty relation.

We measure a property of the electrons by altering them with a particle of similar size (a photon). This changes other properties of the electrons.
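
For reference, the quantitative statement this analogy gestures at is the standard Heisenberg relation \(\Delta x \, \Delta p \ge \hbar/2\) (not derived here).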

The electrons are dead. The particle does not “know” at all what is going on here. You know as well as I do that particles are not sentient beings.

Thanks to Carolin Zöbelein for correcting my language mistakes.

A Simple Look at Nanowire Assemblies in Optics

Below is a quick attempt to summarize and extrapolate from an article on Optical Routing And Sensing With Nanowire Assemblies in a simple manner, without assuming the reader is deeply familiar with the esoteric lexicon of photonics and optoelectronics. I can’t freely distribute the article which inspired this post; check with your local university to find a copy. More information on the relevant article is given in the sources cited at the end of this post.

Disclaimer: I am a relative n00b in the material science-y side of photonics; I’ve previously played in physics at a higher level of abstraction. I attempted to keep the oversimplification to a minimum in my “simple” discussion below.

Sidenote: I decided to include the definitions in-line rather than using footnotes. I found the use of footnotes to organize definitions very similar to spaghetti code – which I prefer to avoid.

Enough dabbling, onto the physics!

What are nanowires and nanoribbons?

A nanowire is any wire with a diameter on the nanoscale [on the order of \(10^{-9}\) meters], and nanoribbons are structures which filter for different wavelengths of light [colors] depending on their chemical composition and structure. These ribbons act as short-pass filters; that is, they will only pass the shorter wavelengths of any beam that is not monochromatic.

I recently came across an article exploring the combined use of nanowires and nanoribbons. Nanowire assemblies could potentially replace/supplement lithographically defined structures [created by transferring a pattern by selective exposure to light] currently used in most integrated photonic circuits: circuits which transmit laser beams, as compared to electronic integrated circuits which transmit electrons.

The main advantage of using nanowire assemblies is that they stand without assistance from their substrate’s structure and they are flexible. This allows them to be manipulated on surfaces and to be used as mobile probes in fluids. This versatility contrasts with the permanent location of lithographically defined structures with respect to their substrates.

The team of physicists who published this paper experimented with the assemblies in both dry and liquid media. In the dry medium, they synthesized tin-dioxide nanoribbons and manipulated the ribbons and wires with a probe under a dark-field microscope. They used both a HeCd laser [a laser which emits blue light] for continuous unfiltered light [a constant stream of most colors of light] and a YAG laser [a laser which emits ultraviolet light] for pumped pulsing; laser pumping is a technique used here to create or amplify laser beams. They ran the laser beams through various nanowire assemblies [photonic circuit configurations] and measured the transmission efficiency by comparing the output intensity of the lasers to the input intensity.


Nanowire assemblies have unpredictable geometries; this introduces uncertainty into the functionality of the assemblies. Additionally, the precise geometry definable in lithography allows for lower interwire coupling losses.


Photonic & Electronic Integrated Circuits

Let’s look at an alternative to lithographically (i.e., created by transferring a pattern by selective exposure to light) defined structures: optical integrated circuits.

Specifically, we’ll examine the usage of single-crystalline nanoribbons, useful because of their lack of edge effects. These edge effects, usually due to grain boundaries (defects in the crystal structures), cause significant loss of efficiency in the transmission of optical signals. Using far-field microscopy and spectroscopy, the waveguiding behavior of individual nanoribbons can be observed in detail. The size of any given nanoribbon can be inferred from the color of its guided photoluminescence; large ribbons are white (less filtering), while smaller ribbons are blue (short-pass filtering). Waveguides are used in optics to transmit light and signals over long distances and with a high signal rate.

The team [0] experimented with light injection into a ribbon cavity. Optical cavities are arrangements of mirrors that provide a type of filter for injected light. Light confined in an optical cavity will reflect multiple times and interfere with itself to filter for desired frequencies by eliminating undesirable frequencies by means of destructive interference. Various coupling geometries are usually tested using near-field optical microscopy (NSOM). For example, quantifying the transmission loss of long straight ribbons using NSOM shows that these single-crystalline ribbons suffer substrate-induced radiation loss and, in some cases, minor scattering from imperfections in the crystal (crystal steps). However, this percentage of loss in the light’s amplitude is acceptable for use in sub-micrometer light transmission (used in integrated planar photonic applications).

Nanowire light sources can be successfully coupled to ribbon waveguides to create input and output for future photonic devices. Most research into developing nanowire photonic circuitry explores various nanoribbon bondings, various ribbon geometries and especially focuses on the efficacy and efficiency of this alternative to electrical integrated circuits in the hope of proving that photonic integrated circuits have the potential to supplement and possibly overtake today’s electronically based circuitry.

Practical devices will require a combination of electrically driven nanowire light sources and nanoribbons: the next generation of integrated circuits will likely be a combination of parallel electrical and optical circuit elements. The main problem to overcome is developing better ways to integrate optical and electrical circuit elements for the parallel processing of electron and photon based signal transmission.

Sources Cited

[0] Sirbuly, D. J. (2005). Optical Routing And Sensing With Nanowire Assemblies. Proceedings of the National Academy of Sciences, 102(22), 7800-7805.
[1] Law, M. (2004). Nanoribbon Waveguides For Subwavelength Photonics Integration. Science, 305(5688), 1269-1273.