Fiber Bundles of Formal Disks

Here is an incomplete proof that varieties are fiber bundles of formal disks over their de Rham stacks. The fact makes intuitive sense: the de Rham stack is the variety without its infinitesimal data, so by adding the infinitesimal data (formal disks) back in, you recover the variety. However, the fact that you can build anything non-infinitesimal out of formal disks fills me with confusion and awe.

Acknowledgements: This is the result of a working group with Dan Fletcher, Adam Holeman, and me as part of the Northwestern Homotopy Working Seminar (started by Matthew Weatherly, Grigory Kondyrev, and me). Dan was not at the working session in which Adam and I figured out the proof, which is why his name does not appear below, but he was very helpful in understanding the claim. The proof is the result of Yaroslav Khromenkov coming up with the idea, and of Adam and me understanding and correcting his solution during a working session.


Automorphisms of the Jacobian

Read our arxiv paper:

Here, \(A\) is any abelian variety. This post consists of the backstory of this paper, and something interesting I learned about the relationship between the size of \(Aut(A)\) and the number of principal polarizations \(A\) has.


In the summer of 2017, I wanted to compute a period matrix of a particular genus 3 curve, and I found a wonderful team of low dimensional geometers: Dami Lee and Matthias Weber. After they helped me, I read Dami’s work out of curiosity, and found her calculations of automorphism groups of curves fascinating due to their geometric simplicity. Further, it seemed that her methods of using a tessellation to compute or visualize an automorphism group would generalize naturally to higher dimensions. So, we set to work trying to figure that out. This led us to generalize her prior work on automorphism groups of curves (with equal weight Weierstrass points) to a broader class of curves (with nonequal weight Weierstrass points).

I was interested in her lovely descriptions of the automorphism groups and their actions, as I was trying to use her work to model the action of a subgroup of the Morava stabilizer group on Lubin–Tate space. It turns out I will likely have to do this separately, because so much information about the automorphism group changes when you base change your curve from \(\mathbb{C}\) to \(\mathbb{F}_p\), and her methods only work over \(\mathbb{C}\) (relying on properties of Weierstrass points and so on).

In the interim, I separately looked into other ways to compute automorphisms of curves and their Jacobians, and ran into the work of Magma and Sage contributors Edgar Costa, John Voight, Nicolas Mascot, and Jeroen Sijsling. They had a program and paper to compute endomorphism groups of Jacobians, and I wanted to calculate automorphism groups (because, in the end, this is all for modeling subgroups of the Morava stabilizer group, the automorphism group of a height \(n\) formal group law in characteristic \(p\)). Together, we wrote a program that calculates the automorphism group of a Jacobian, given its period matrix. Several glitches later, we realized that we were finding several different automorphism groups for the same Jacobian (as a period matrix), because there really were several different automorphism groups, one for each principal polarization. So, I found the “correct one” by putting in the information of the original plane curve as well. But this detour led us to find several interesting different principal polarizations….

Automorphism Groups and Narasimhan-Nori

…It also led me to learn about the following magic relationship between the size of the automorphism group of an abelian variety \(A\) and the number of its principal polarizations (this is from a paper of Lange: Abelian varieties with several principal polarizations).

I still do not understand the computability of the order of the set of interest \(\Pi(A)\) (the number of principal polarizations up to iso of \(A\)) according to Theorem 1.5:
Some notation:
Fix a principal polarization \(L_0: A \to \hat{A}\).
Given a map \(r \in Aut(A)\), let \(\hat{r}\) be the dual map in \(Aut(\hat{A})\).
Let \('\) indicate the Rosati involution wrt \(L_0\), that is, \(r' := L_0^{-1} \circ \hat{r} \circ L_0\).

Let us first look at the set of \(r \in Aut(A)\) such that the following two conditions are met:

(1) \(r' = r\) (i.e., \(r\) is preserved under the Rosati involution wrt \(L_0\))
(2) the zeros of the minimal polynomial of \(r\) (wrt the rational embedding) are all positive.

Side comment:
(1) \(\Leftrightarrow\) \((r^g) = g!\) [Lemma 1.2]
(2) \(\Leftrightarrow\) \((L_0^{g-i} \cdot r^i) > 0\) [Lemma 1.3]
Together, these conditions give us that \(r\) is a principal polarization, by Lemma 1.1.

Once we have this set, call it \(U(A)\), let \(Aut(A)\) act on it:

\(Aut(A) \times U(A) \to U(A)\)
\((g, a) \mapsto g' a g\)

where \('\) indicates the Rosati involution wrt \(L_0\).

We mod out the set \(U(A)\) by the above action of \(Aut(A)\), and call the resulting quotient set \(\Pi(A)\).

And there we are!
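The construction above is concrete enough to sketch in code. Here is a minimal toy illustration, with heavy caveats: it represents endomorphisms as integer matrices and *assumes* a basis in which the Rosati involution wrt \(L_0\) is the transpose, and it checks condition (2) via eigenvalues rather than the minimal polynomial. All names and the setup are invented for illustration; this is not Lange's algorithm, just the shape of "build \(U(A)\), then quotient by the \(Aut(A)\)-action \((g, a) \mapsto g' a g\)".

```python
import numpy as np

def rosati(r):
    """Rosati involution r -> r' (here: transpose, by assumption)."""
    return r.T

def in_U(r):
    """Check the two conditions defining U(A) for a matrix r."""
    # (1) r is fixed by the Rosati involution
    if not np.array_equal(rosati(r), r):
        return False
    # (2) all roots are positive (eigenvalues stand in for the zeros
    #     of the minimal polynomial in this toy version)
    eigs = np.linalg.eigvals(r.astype(float))
    return bool(np.all(eigs.real > 0) and np.all(np.abs(eigs.imag) < 1e-9))

def pi_A(aut, candidates):
    """Quotient U(A) by the action (g, a) -> g' a g, returning orbit reps."""
    U = [r for r in candidates if in_U(r)]
    reps, seen = [], set()
    for a in U:
        if a.tobytes() in seen:
            continue  # already covered by an earlier orbit
        orbit = {(rosati(g) @ a @ g).tobytes() for g in aut}
        seen |= orbit
        reps.append(a)
    return reps
```

For example, with \(Aut(A) = \{\pm I\}\) acting on 2×2 integer matrices, the identity and \(\begin{pmatrix}2&1\\1&1\end{pmatrix}\) both land in \(U(A)\) and sit in distinct orbits, while \(\begin{pmatrix}0&1\\1&0\end{pmatrix}\) fails condition (2).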

CAMEL paper

I used Braille as a test language, but this is a framework to automate the decoding of any partially understood (ancient) language by creating probabilistic dictionaries.

I began working on CAMEL (Contextual Machine Learning Through the Analysis and Chunking of Partially Translated Grade 2 Braille) when I saw this atrocity: the Braille below translates to “STAIRWELL.”
Most sighted people don’t know Braille (to my dismay; the grammar is beautiful). This gave me the idea to write an optical Braille reader app for the sighted: the user holds their Android camera up to a sign, and it automatically translates the Braille into English.

As I sat down to code this, I realized that I’d have to hard-code a dictionary of Grade 2 Braille (a grammatically complex language). I have a deep disgust for hard-coding \(\Rightarrow\) CAMEL is the program I wrote to automate the creation of a Grade 2 Braille dictionary. All code used in this project is on GitHub.
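The core idea of a probabilistic dictionary can be sketched in a few lines. This is not CAMEL's actual code (which is on GitHub); it is a hypothetical, stripped-down illustration of the counting step: each untranslated chunk is mapped to a probability distribution over its candidate translations, estimated from how often they co-occur in partially translated text. The corpus format and chunking are invented here.

```python
from collections import defaultdict

def build_probabilistic_dictionary(aligned_pairs):
    """aligned_pairs: iterable of (chunk, candidate_translation) observations."""
    counts = defaultdict(lambda: defaultdict(int))
    for chunk, translation in aligned_pairs:
        counts[chunk][translation] += 1
    dictionary = {}
    for chunk, trans_counts in counts.items():
        total = sum(trans_counts.values())
        # each chunk maps to a probability distribution over translations
        dictionary[chunk] = {t: c / total for t, c in trans_counts.items()}
    return dictionary

# Toy example: a chunk seen three times as "st" and once as "/"
pairs = [("⠌", "st"), ("⠌", "st"), ("⠌", "st"), ("⠌", "/")]
print(build_probabilistic_dictionary(pairs)["⠌"])  # {'st': 0.75, '/': 0.25}
```

Grade 2 Braille is exactly the setting where this matters: a single cell can stand for a letter, a contraction, or punctuation depending on context, so a deterministic lookup table is the wrong data structure.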

This program is based on contextual machine learning, so I named my project CAMEL (ContextuAl MachinE Learning). The title of my paper is a mouthful because I’m unsure how to shorten it while maintaining clarity: CAMEL