General Modular Symbols

In this chapter we explain how to generalize the notion of modular symbols given in Chapter Modular Forms of Weight 2 to higher weight and more general level. We define Hecke operators on modular symbols and describe their relation to modular forms via the integration pairing. We omit many of the more difficult proofs that modular symbols have the properties we claim and instead focus on how to compute with them. For more details see the references given in this section (especially [Mer94]) and [Wie05].

Modular symbols are a formalism that makes it elementary to compute with homology or cohomology related to certain Kuga-Sato varieties (these are \cE\times_X \cdots \times_X \cE, where X is a modular curve and \cE is the universal elliptic curve over it). It is not necessary to know anything about these Kuga-Sato varieties in order to compute with modular symbols.

This chapter is about spaces of modular symbols and how to compute with them. It is by far the most important chapter in this book. The algorithms that build on the theory in this chapter are central to all the computations we will do later in the book.

This chapter closely follows Loïc Merel’s paper [Mer94]. First we define modular symbols of weight k\geq 2. Then we define the corresponding Manin symbols and state a theorem of Merel-Shokurov, which gives all relations between Manin symbols. (The proof of the Merel-Shokurov theorem is beyond the scope of this book but is presented nicely in [Wie05].) Next we describe how the Hecke operators act on both modular and Manin symbols and how to compute trace and inclusion maps between spaces of modular symbols of different levels.

Not only are modular symbols useful for computation, but they have been used to prove theoretical results about modular forms. For example, certain technical calculations with modular symbols are used in Loïc Merel’s proof of the uniform boundedness conjecture for torsion points on elliptic curves over number fields (modular symbols are used to understand linear independence of Hecke operators). Another example is [Gri05], which distills hypotheses about Kato’s Euler system in K_2 of modular curves to a simple formula involving modular symbols (when the hypotheses are satisfied, one obtains a lower bound on the Shafarevich-Tate group of an elliptic curve).

Modular Symbols

We recall from Chapter Modular Forms of Weight 2 the free abelian group \sM_2 of modular symbols. We view these as elements of the relative homology of the extended upper half plane \h^* = \h\union \P^1(\Q) relative to the cusps. The group \sM_2 is the free abelian group on symbols \{\alpha,\beta\} with

\alpha, \beta \in \P^1(\Q) = \Q \union \{\infty\}

modulo the relations

\{\alpha,\beta\} + \{\beta,\gamma\} + \{\gamma,\alpha\} = 0,

for all \alpha,\beta,\gamma\in\P^1(\Q), and all torsion. More precisely,

\sM_2 =
(F/R)/(F/R)_{\tor},

where F is the free abelian group on all pairs (\alpha,\beta) and R is the subgroup generated by all elements of the form (\alpha,\beta) + (\beta,\gamma) + (\gamma,\alpha). Note that \sM_2 is a huge free abelian group of countable rank.

For any integer n\geq 0, let \Z[X,Y]_n be the abelian group of homogeneous polynomials of degree n in two variables X,Y.

Remark 1.22

Note that \Z[X,Y]_n is isomorphic to \Sym^{n}(\Z\cross \Z) as a group, but certain natural actions are different. In [Mer94], Merel uses the notation \Z_{n}[X,Y] for what we denote by \Z[X,Y]_{n}.

Now fix an integer k\geq 2. Set

\sM_k = \Z[X,Y]_{k-2} \tensor_\Z \sM_2,

which is a torsion-free abelian group whose elements are sums of expressions of the form X^{i}Y^{k-2-i}\tensor \{\alpha,\beta\}. For example,

X^3 \tensor \{0,1/2\} -17 XY^2 \tensor \{\infty,1/7\} \in \sM_{5}.

Fix a finite index subgroup G of \SL_2(\Z). Define a left action of G on \Z[X,Y]_{k-2} as follows. If g=\abcd{a}{b}{c}{d}\in G and P(X,Y)\in \Z[X,Y]_{k-2}, let

(g P)(X,Y) = P(dX-bY, -cX+aY).

Note that if we think of z=(X,Y) as a column vector, then

(g P)(z) = P(g^{-1}z),

since g^{-1} = \abcd{\hfill{}d}{-b}{-c}{\hfill{}a}. The reason for the inverse is so that this is a left action instead of a right action, e.g., if g,h\in G, then

((gh)P)(z) = P((gh)^{-1}z) = P(h^{-1}g^{-1}z) =
(hP)(g^{-1}z) = (g(hP))(z).

Recall that we let G act on the left on \sM_2 by

g\{\alpha, \beta\} = \{g(\alpha), g(\beta)\},

where G acts via linear fractional transformations, so if g=\abcd{a}{b}{c}{d}, then

g(\alpha)=\frac{a\alpha + b}{c\alpha + d}.

For example, useful special cases to remember are that if g=\abcd{a}{b}{c}{d}, then

g(0) = \frac{b}{d} \qquad\text{and}\qquad g(\infty) = \frac{a}{c}.

(Here we view \infty as 1/0 in order to describe the action.)

We now combine these two actions to obtain a left action of G on \sM_{k}, which is given by

g(P \tensor \{\alpha, \beta\}) = (gP) \tensor\{g(\alpha), g(\beta)\}.

For example,

\mtwo{\hfill 1}{\hfill 2}{-2}{-3} (X^3 \tensor \{0,1/2\}) &=
(-3X - 2Y)^3 \tensor\left\{-\frac{2}{3},  -\frac{5}{8}\right\} \\
&\hspace{-4em}= (-27X^3 - 54X^2Y - 36XY^2 - 8Y^3)\tensor \left\{-\frac{2}{3},  -\frac{5}{8}\right\}.

We will often write P(X,Y)\{\alpha,\beta\} for P(X,Y)\tensor\{\alpha,\beta\}.

Definition 1.23

Let k\geq 2 be an integer and let G be a finite index subgroup of \SL_2(\Z). The space \sM_k(G) of weight k modular symbols for G is the quotient of \sM_k by all relations gx - x for x \in \sM_k, g\in G, and by any torsion.

Note that \sM_k(G) is a torsion-free abelian group, and it is a nontrivial fact that \sM_k(G) has finite rank. We denote modular symbols for G in exactly the same way we denote elements of \sM_k; the group G will be clear from context.

The space of modular symbols over a ring R is

\sM_k(G;R) = \sM_k(G)\tensor_\Z R.

Manin Symbols

Let G be a finite index subgroup of \SL_2(\Z) and k\geq 2 an integer. Just as in Chapter Modular Forms of Weight 2 it is possible to compute \sM_k(G) using a computer, despite that, as defined above, \sM_k(G) is the quotient of one infinitely generated abelian group by another one. This section is about Manin symbols, which are a distinguished subset of \sM_k(G) that lead to a finite presentation for \sM_k(G). Formulas written in terms of Manin symbols are frequently much easier to compute using a computer than formulas in terms of modular symbols.

Suppose P\in \Z[X,Y]_{k-2} and g\in \SL_2(\Z). Then the Manin symbol associated to this pair of elements is

[P,g] = g(P\{0,\infty\}) \in \sM_k(G).

Notice that if Gg = Gh, then [P,g] = [P,h], since the symbol g(P\{0,\infty\}) is invariant by the action of G on the left (by definition, since it is a modular symbol for G). Thus for a right coset Gg it makes sense to write [P, Gg] for the symbol [P,h] for any h\in Gg. Since G has finite index in \SL_2(\Z), the abelian group generated by Manin symbols is of finite rank, generated by

\bigl\{ [X^{k-2-i}Y^i, \, Gg_j] \, : \, i = 0,\ldots, k-2 \quad\text{and}\quad j = 0,\ldots, r \bigr\},

where g_0, \ldots, g_r run through representatives for the right cosets G\backslash \SL_2(\Z).

We next show that every modular symbol can be written as a \Z-linear combination of Manin symbols, so they generate \sM_k(G).

Proposition 1.24

The Manin symbols generate \sM_k(G).

Proof

The proof is very similar to that of Proposition 3.11, except we introduce an extra twist to deal with the polynomial part. Suppose that we are given a modular symbol P\{\alpha,\beta\} and wish to represent it as a sum of Manin symbols. Because

P\{a/b,c/d\} = P\{a/b,0\}+P\{0,c/d\},

it suffices to write P\{0,a/b\} in terms of Manin symbols. Let

0=\frac{p_{-2}}{q_{-2}} = \frac{0}{1},\,\,
\frac{p_{-1}}{q_{-1}}=\frac{1}{0},\,\,
\frac{p_0}{q_0}=\frac{p_0}{1},\,\,
\frac{p_1}{q_1},\,\,
\frac{p_2}{q_2},\,\ldots,\,\frac{p_r}{q_r}=\frac{a}{b}

denote the continued fraction convergents of the rational number a/b. Then

p_j q_{j-1}
- p_{j-1} q_j = (-1)^{j-1}\qquad \text{for }-1\leq j\leq r.

If we let g_j = \mtwo{(-1)^{j-1}p_j}{p_{j-1}}{(-1)^{j-1}q_j}{q_{j-1}}, then g_j\in\sltwoz and

P\{0,a/b\}
&=P\sum_{j=-1}^{r}\left\{\frac{p_{j-1}}{q_{j-1}},\frac{p_j}{q_j}\right\}\\
&=\sum_{j=-1}^{r} g_j((g_j^{-1}P)\{0,\infty\})\\
&=\sum_{j=-1}^{r} [g_j^{-1}P,\,\,g_j].

Since g_j\in\sltwoz and P has integer coefficients, the polynomial g_j^{-1}P also has integer coefficients, so we introduce no denominators.
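
The following plain Python function (it runs unchanged in Sage; the name manin_matrices is ours, not a library routine) is a minimal sketch of this construction: assuming \gcd(a,b)=1 and b>0, it returns the matrices g_j of the proof, so that P\{0,a/b\}=\sum_j [g_j^{-1}P,\,g_j].

def manin_matrices(a, b):
    # Convergents p_j/q_j of the continued fraction of a/b, seeded with
    # p_{-2}/q_{-2} = 0/1 and p_{-1}/q_{-1} = 1/0 as in the proof.
    p, q = [0, 1], [1, 0]
    x, y = a, b
    while y != 0:
        c = x // y
        x, y = y, x - c * y
        p.append(c * p[-1] + p[-2])
        q.append(c * q[-1] + q[-2])
    mats = []
    # List index i corresponds to j = i - 2, so j runs over -1, 0, ..., r.
    for i in range(1, len(p)):
        j = i - 2
        sgn = 1 if (j - 1) % 2 == 0 else -1      # (-1)^(j-1)
        mats.append([[sgn * p[i], p[i - 1]],
                     [sgn * q[i], q[i - 1]]])
    return mats

For example, manin_matrices(1, 2) returns the three matrices [[1,0],[0,1]], [[0,1],[-1,0]], and [[1,0],[2,1]], each of determinant 1.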

Now that we know the Manin symbols generate \sM_k(G), we next consider the relations between Manin symbols. Fortunately, the answer is fairly simple (though the proof is not). Let

\sigma = \mtwo{0}{-1}{1}{\hfill 0},\qquad
\tau = \mtwo{0}{-1}{1}{-1}, \qquad
J = \mtwo{-1}{\hfill 0}{\hfill 0}{-1}.

Define a right action of \SL_2(\Z) on Manin symbols as follows. If h \in \SL_2(\Z), let

[P, g]h = [h^{-1} P, gh].

This is a right action because both P\mapsto h^{-1}P and g\mapsto gh are right actions.

Theorem 1.25

If x is a Manin symbol, then

x + x\sigma &= 0,\\
x + x\tau + x\tau^{2} &= 0,\\
x - xJ &= 0.

Moreover, these are all the relations between Manin symbols, in the sense that the space \sM_k(G) of modular symbols is isomorphic to the quotient of the free abelian group on the finitely many symbols [X^{i}Y^{k-2-i},Gg] (for i=0,\ldots, k-2 and Gg \in G\backslash \SL_2(\Z)) by the above relations and any torsion.

Proof

First we prove that the Manin symbols satisfy the above relations. We follow Merel’s proof (see [Mer94, Section 1.2]). Note that

\sigma(0) = \sigma^2(\infty) = \infty
\qquad\text{and}\qquad
\tau(1) = \tau^2(0) = \infty.

Writing x=[P,g], we have

[P,g] + [P,g]\sigma &=
[P,g]  + [\sigma^{-1}P, g\sigma]\\
&= g(P\{0,\infty\}) + g\sigma (\sigma^{-1}P \{0,\infty\})\\
&=
(gP) \{ g(0), g(\infty) \} +
(g\sigma)(\sigma^{-1} P)\{ g\sigma(0), g\sigma(\infty)\} \\
&=   (gP) \{ g(0), g(\infty) \} +
(gP)\{ g(\infty), g(0)\} \\
&= (gP)\bigl(\{ g(0), g(\infty) \} + \{ g(\infty), g(0)\}\bigr)\\
&= 0.

Also,

[P,g] &+ [P,g]\tau + [P,g]\tau^2 =
[P,g]  + [\tau^{-1}P, g\tau] + [\tau^{-2}P, g\tau^2]\\
&=
g(P\{0,\infty\}) +
g\tau(\tau^{-1}P \{0,\infty\})
+ g\tau^2(\tau^{-2}P \{0,\infty\}) \\
&= (gP) \{ g(0), g(\infty) \} +
(gP) \{ g \tau(0),g\tau(\infty)\}
+ (gP) \{ g\tau^2(0),g\tau^2(\infty)\} \\
&= (gP) \{ g(0), g(\infty) \} +
(gP) \{ g(1),g(0)\}
+ (gP) \{ g(\infty), g(1)\} \\
&= (gP)\bigl(\{ g(0), g(\infty)\} +
\{ g(\infty), g(1)\} + \{ g(1),g(0)\}\bigr) \\
&= 0.

Finally,

[P,g] - [P,g]J &=
g(P\{0,\infty\}) - gJ(J^{-1} P \{0,\infty\})\\
&= (gP)\{g(0),g(\infty)\} - (gP) \{g(0),g(\infty)\}\\
&= 0,

where we use that J acts trivially via linear fractional transformations. This proves that the listed relations are all satisfied.

That the listed relations are all relations is more difficult to prove. One approach is to show (as in [Mer94, Section 1.3]) that the quotient of Manin symbols by the above relations and torsion is isomorphic to a space of Shokurov symbols, which is in turn isomorphic to \sM_k(G). A much better approach is to apply some results from group cohomology, as in [Wie05].

If G is a finite index subgroup and we have an algorithm to enumerate the right cosets G\backslash \SL_2(\Z) and to decide which coset an arbitrary element of \SL_2(\Z) belongs to, then Theorem 1.25 and the algorithms of Chapter Linear Algebra yield an algorithm to compute \sM_k(G;\Q). Note that if J\in G, then the relation x-xJ=0 is automatic.

Remark 1.26

The matrices \sigma and \tau do not commute, so in computing \sM_k(G;\Q), one cannot first quotient out by the two-term \sigma relations, then quotient out only the remaining free generators by the \tau relations, and get the right answer in general.
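
For example, Sage computes \sM_k(G;\Q) from a Manin symbol presentation of exactly this kind; the following dimensions agree with the worked examples later in this chapter:

sage: ModularSymbols(Gamma0(6), 2).dimension()
3
sage: ModularSymbols(Gamma0(1), 4).dimension()
1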

Coset Representatives and Manin Symbols

The following is analogous to Proposition 3.10:

Proposition 1.27

The right cosets \Gamma_1(N)\backslash \SL_2(\Z) are in bijection with pairs (c,d) where c,d\in\Z/N\Z and \gcd(c,d,N)=1. The coset containing a matrix \abcd{a}{b}{c}{d} corresponds to (c,d).

Proof

This proof is copied from [Cre92, pg. 203], except in that paper Cremona works with the analogue of \Gamma_1(N) in \PSL_2(\Z), so his result is slightly different. Suppose \gamma_i = \abcd{a_i}{b_i}{c_i}{d_i} \in \SL_2(\Z), for i=1,2. We have

\gamma_1 \gamma_2^{-1} =
\mtwo{a_1}{b_1}{c_1}{d_1}
\mtwo{\hfill{}d_2}{-b_2}{-c_2}{\hfill{}a_2}
=\mtwo{a_1 d_2 - b_1 c_2}{*}{c_1 d_2 - d_1 c_2}{a_2 d_1 - b_2 c_1},

which is in \Gamma_1(N) if and only if

(1)c_1 d_2 - d_1 c_2 \con 0 \pmod{N}

and

(2)a_2 d_1 - b_2 c_1 \con a_1 d_2 - b_1 c_2 \con 1\pmod{N}.

Since the \gamma_i have determinant 1, if (c_1,d_1)\con(c_2,d_2)\pmod{N}, then the congruences (1) and (2) hold. Conversely, if (1) and (2) hold, then

c_2 &\con a_2 d_1 c_2 - b_2  c_1 c_2 \\
&\con a_2 d_2 c_1 - b_2 c_2 c_1 \quad\text{ since }d_1 c_2 \con d_2 c_1 \pmod{N}\\
&\con c_1 \qquad\qquad\qquad\quad\,\text{since }a_2 d_2 - b_2 c_2 = 1,

and likewise

d_2 \con a_2 d_1 d_2 - b_2 c_1 d_2
\con a_2 d_1 d_2 - b_2 d_1 c_2
\con d_1\pmod{N}.

Thus we may view weight k Manin symbols for \Gamma_1(N) as triples of integers (i,c,d), where 0\leq i\leq k-2 and c, d\in\Z/N\Z with \gcd(c,d,N)=1. Here (i,c,d) corresponds to the Manin symbol [X^iY^{k-2-i}, \abcd{a}{b}{c'}{d'}], where c' and d' are congruent to c,d \pmod{N}. The relations of Theorem 1.25 become

(i,\,c,d) + (-1)^i (k-2-i, \,d, -c) &= 0,\\
(i,\,c,d) \quad + \quad\,        (-1)^{k-2} \sum_{j=0}^{k-2-i}  (-1)^{j} \binom{k-2-i}{j} (j,\,d,-c-d)&\\
\qquad\qquad + \quad
(-1)^{k-2-i} \sum_{j=0}^i (-1)^j \binom{i}{j} (k-2-i+j,\,-c-d,c)
&=0,\\
(i,\,c,d) - (-1)^{k-2}(i,\,-c,-d) &= 0.

Recall that Proposition 3.10 gives a similar description for \Gamma_0(N).

Modular Symbols with Character

Suppose G=\Gamma_1(N). Define an action of the diamond-bracket operators \dbd{d}, for \gcd(d,N)=1, on Manin symbols as follows:

\langle d \rangle ([P,(u,v)]) = \left[P,(du, dv)\right].

Let

\eps : (\Z/N\Z)^* \to \Q(\zeta)^*

be a Dirichlet character, where \zeta is a primitive n^{th} root of unity and n is the order of \eps. Let \sM_k(N,\eps) be the quotient of \sM_k(\Gamma_1(N);\Z[\zeta]) by the relations (given in terms of Manin symbols)

\dbd{d}x - \eps(d)x = 0,

for all x\in \sM_k(\Gamma_1(N);\Z[\zeta]), d\in (\Z/N\Z)^*, and by any \Z-torsion. Thus \sM_k(N,\eps) is a \Z[\eps]-module that has no torsion when viewed as a \Z-module.
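
In Sage one passes a Dirichlet character instead of a congruence subgroup to get modular symbols with character; a small example (the base ring is the cyclotomic field generated by the values of \eps):

sage: eps = DirichletGroup(5).0    # a character of (Z/5Z)^* of order 4
sage: eps.order()
4
sage: M = ModularSymbols(eps, 2)
sage: M.base_ring()
Cyclotomic Field of order 4 and degree 2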

Hecke Operators

Suppose \Gamma is a subgroup of \SL_2(\Z) of level N that contains \Gamma_1(N). Just as for modular forms, there is a commutative Hecke algebra \T=\Z[T_1,T_2, \ldots], which is the subring of \End(\sM_k(\Gamma)) generated by all Hecke operators. Let

R_p = \left\{\mtwo{1}{r}{0}{p} : r = 0,1,\ldots, p-1 \right\}
\union \left\{ \mtwo{p}{0}{0}{1}\right\},

where we omit \abcd{p}{0}{0}{1} if p\mid N. Then the Hecke operator T_p on \sM_k(\Gamma) is given by

T_p(x) = \sum_{g \in R_p} gx.

Notice when p\nmid N that T_p is defined by summing over the p+1 matrices that correspond to the p+1 subgroups of \Z\times \Z of index p. This is exactly how we defined T_p on modular forms in Definition 2.26.
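
As a small sketch, the set R_p is easy to generate in code (plain Python; the function name is ours):

def hecke_Tp_matrices(p, N):
    # Matrices whose actions sum to T_p on modular symbols for Gamma,
    # omitting [[p,0],[0,1]] when p divides N (see the display above).
    mats = [[[1, r], [0, p]] for r in range(p)]
    if N % p != 0:
        mats.append([[p, 0], [0, 1]])
    return mats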

General Definition of Hecke Operators

Let \Gamma be a finite index subgroup of \SL_2(\Z) and suppose

\Delta\subset \GL_2(\Q)

is a set such that \Gamma\Delta = \Delta\Gamma = \Delta and \Gamma \backslash \Delta is finite. For example, \Delta=\Gamma satisfies this condition. Also, if \Gamma=\Gamma_1(N), then for any positive integer n, the set

\Delta_n = \left\{
\!\mtwo{a}{b}{c}{d}\in \Mat_2(\Z) \,:\, ad-bc=n, \quad
\mtwo{a}{b}{c}{d}\con \mtwo{1}{*}{0}{n}\!\!\!\!\!\!\pmod{N}
\right\}

also satisfies this condition, as we will now prove.

Lemma 1.28

We have

\Gamma_1(N)\cdot \Delta_n = \Delta_n \cdot \Gamma_1(N) = \Delta_n

and

\Delta_n = \bigcup_{a,b} \Gamma_1(N) \cdot \sigma_a \mtwo{a}{b}{0}{n/a},

where \sigma_a \con \abcd{\hfill 1/a}{\,\, 0}{0}{ \hfill a}\pmod{N}, the union is disjoint and 1\leq a \leq n with a\mid n, \gcd(a,N)=1, and 0\leq b < n/a. In particular, the set of cosets \Gamma_1(N)\backslash \Delta_n is finite.

Proof

(This is Lemma 1 of [Mer94, Section 2.3].) If \gamma\in\Gamma_1(N) and \delta\in\Delta_n, then

\mtwo{1}{*}{0}{1} \cdot \mtwo{1}{*}{0}{n} \con
\mtwo{1}{*}{0}{n} \cdot \mtwo{1}{*}{0}{1}
\con \mtwo{1}{*}{0}{n}\pmod{N}.

Thus \Gamma_1(N)\Delta_n \subset \Delta_n, and since \Gamma_1(N) is a group, \Gamma_1(N)\Delta_n = \Delta_n; likewise \Delta_n\Gamma_1(N) = \Delta_n.

For the coset decomposition, we first prove the statement for N=1, i.e., for \Gamma_1(N)=\SL_2(\Z). If A is an arbitrary element of \Mat_2(\Z) with determinant n, then using row operations on the left with determinant 1, i.e., left multiplication by elements of \SL_2(\Z), we can transform A into the form \abcd{a}{b}{0}{n/a}, with 1\leq a\leq n and 0\leq b < n/a. (Just imagine applying the Euclidean algorithm to the two entries in the first column of A. Then a is the \gcd of the two entries in the first column, and the lower left entry is 0. Next subtract suitable multiples of n/a from b until 0\leq b < n/a.)

Next suppose N is arbitrary. Let g_1, \ldots, g_r be such that

g_1\Gamma_1(N) \union \cdots \union g_r \Gamma_1(N) = \SL_2(\Z)

is a disjoint union. If A \in \Delta_n is arbitrary, then as we showed above, there is some \gamma\in\SL_2(\Z), so that \gamma\cdot A = \abcd{a}{b}{0}{n/a}, with 1\leq a \leq n and 0\leq b < n/a, and a\mid n. Write \gamma = g_i \cdot \alpha, with \alpha\in\Gamma_1(N). Then

\alpha\cdot A = g_i^{-1} \cdot \mtwo{a}{b}{0}{n/a} \con \mtwo{1}{*}{0}{n}\pmod{N}.

It follows that

g_i^{-1} \con \mtwo{1}{*}{0}{n} \cdot \mtwo{a}{b}{0}{n/a}^{-1}
\con \mtwo{1/a}{*}{0}{a}\pmod{N}.

Since \abcd{1}{1}{0}{1}\in \Gamma_1(N) and \gcd(a,N)=1, there is \gamma'\in\Gamma_1(N) such that

\gamma' g_i^{-1} \con \mtwo{1/a}{0}{0}{a}\pmod{N}.

We may then choose \sigma_a = \gamma' g_i^{-1}. Thus every A\in \Delta_n is of the form \gamma \sigma_a \abcd{a}{b}{0}{n/a}, with \gamma\in\Gamma_1(N) and a,b suitably bounded. This proves the second claim.

Let any element \delta=\abcd{a}{b}{c}{d}\in\GL_2(\Q) act on the left on modular symbols \sM_k\tensor\Q by

\delta(P\{\alpha,\beta\}) = P(dX-bY,-cX+aY) \{\delta(\alpha),\delta(\beta)\}.

(Until now we had only defined an action of \SL_2(\Z) on modular symbols.) For g=\abcd{a}{b}{c}{d}\in \GL_2(\Q), let

(3)\tilde{g} = \mtwo{\hfill{}d}{-b}{-c}{\hfill{}a}
= \det(g)\cdot g^{-1}.

Note that \tilde{\tilde{g}} = g. Also, (gP)(X,Y)=(P\circ \tilde{g})(X,Y), where we set

\tilde{g}(X,Y) = (dX-bY,-cX+aY).

Suppose \Gamma and \Delta are as above. Fix a finite set R of representatives for \Gamma\backslash \Delta. Let

T_{\Delta} : \sM_k(\Gamma) \to \sM_k(\Gamma)

be the linear map

T_{\Delta}(x) = \sum_{\delta\in R} \delta x.

This map is well defined because if \gamma\in\Gamma and x\in \sM_k(\Gamma), then

\sum_{\delta\in R} \delta \gamma  x =
\sum_{\text{certain $\delta'$}} \gamma \delta' x
= \sum_{\text{certain $\delta'$}} \delta' x
=\sum_{\delta\in R} \delta x,

where we have used that \Delta\Gamma = \Gamma\Delta and that \Gamma acts trivially on \sM_k(\Gamma).

Let \Gamma=\Gamma_1(N) and \Delta=\Delta_n. Then the n^{th} Hecke operator T_n is T_{\Delta_n}, and by Lemma 1.28,

T_n(x) = \sum_{a,b} \sigma_a \mtwo{a}{b}{0}{n/a} \cdot x,

where a,b are as in Lemma 1.28.

Given this definition, we can compute the Hecke operators on \sM_k(\Gamma_1(N)) as follows. Write x as a modular symbol P\{\alpha,\beta\}, compute T_n(x) as a modular symbol, and then convert to Manin symbols using continued fraction expansions. This is extremely inefficient; fortunately Loïc Merel (following [Maz73]) found a much better way, which we now describe (see [Mer94]).

Hecke Operators on Manin Symbols

If S is a subset of \GL_2(\Q), let

\tilde{S} = \{\tilde{g} : g \in  S\},

where \tilde{g} is as in (3). Also, for any ring R and any subset S\subset \Mat_2(\Z), let R[S] denote the free R-module with basis the elements of S, so the elements of R[S] are the finite R-linear combinations of the elements of S.

One of the main theorems of [Mer94] is that for any \Gamma,\Delta satisfying the condition at the beginning of Section General Definition of Hecke Operators, if we can find \sum u_M M \in
\C[\Mat_2(\Z)] and a map

\phi: \tilde{\Delta} \SL_2(\Z) \to \SL_2(\Z)

that satisfies certain conditions, then for any Manin symbol [P,g] \in \sM_k(\Gamma), we have

T_\Delta([P,g]) =
\sum_{gM\in \tilde{\Delta}\SL_2(\Z)\text{ with }M\in\Mat_2(\Z)} u_M
[\tilde{M}\cdot P,\,\, \phi(gM)].

The paper [Mer94] contains many examples of \phi and \sum u_M M \in
\C[\Mat_2(\Z)] that satisfy all of the conditions.

When \Gamma=\Gamma_1(N), the complicated list of conditions becomes simpler. Let \Mat_2(\Z)_n be the set of 2\times 2 matrices with determinant n. An element

h = \sum u_M [M] \in \C[\Mat_2(\Z)_n]

satisfies condition C_n if for every K\in \Mat_2(\Z)_n / \SL_2(\Z), we have that

(4)\sum_{M \in K} u_M([M\infty] - [M 0 ]) = [\infty] - [0]
\in \C[\P^1(\Q)].

If h satisfies condition C_n, then for any Manin symbol [P,(u,v)] \in \sM_k(\Gamma_1(N)), Merel proves that

(5)T_n([P,(u,v)]) = \sum_{M} u_M [P(aX+bY, \, cX+dY), (u,v)M].

Here M = \abcd{a}{b}{c}{d} runs over the matrices appearing in h, the pair (u,v)\in (\Z/N\Z)^2 corresponds via Proposition 1.27 to a coset of \Gamma_1(N) in \SL_2(\Z), and if (u',v')=(u,v)M \in (\Z/N\Z)^2 satisfies \gcd(u',v',N)\neq 1, then we omit the corresponding summand.

For example, we will now check directly that the element

h_2 =  \left[\mtwo{2}{0}{0}{1}\right] +
\left[\mtwo{1}{0}{0}{2}\right] +
\left[\mtwo{2}{1}{0}{1}\right] +
\left[\mtwo{1}{0}{1}{2}\right]

satisfies condition C_2. We have, as in the proof of Lemma 1.28 (but using elementary column operations), that

\Mat_2(\Z)_2/\SL_2(\Z) &= \left\{
\mtwo{a}{0}{b}{2/a} \SL_2(\Z) : a=1,2\text{ and }0\leq b < 2/a
\right\} \\
&\hspace{-2ex}= \left\{ \mtwo{1}{0}{0}{2} \SL_2(\Z),\,\,\,\,
\mtwo{1}{0}{1}{2} \SL_2(\Z),\,\,\,\,
\mtwo{2}{0}{0}{1} \SL_2(\Z) \right\}.

To verify condition C_2, we consider in turn each of the three elements of \Mat_2(\Z)_2/\SL_2(\Z) and check that (4) holds. We have that

\mtwo{1}{0}{0}{2} \in \mtwo{1}{0}{0}{2} \SL_2(\Z),

\mtwo{2}{1}{0}{1}, \mtwo{1}{0}{1}{2} \in \mtwo{1}{0}{1}{2} \SL_2(\Z),

and

\mtwo{2}{0}{0}{1}\in \mtwo{2}{0}{0}{1} \SL_2(\Z).

Thus if K = \abcd{1}{0}{0}{2}\SL_2(\Z), the left side of (4) is [\abcd{1}{0}{0}{2}(\infty)] - [\abcd{1}{0}{0}{2}(0)]  = [\infty]-[0], as required. If K=\abcd{1}{0}{1}{2} \SL_2(\Z), then the left side of (4) is

[\abcd{2}{1}{0}{1}(\infty)] &- [\abcd{2}{1}{0}{1}(0)]
+ [\abcd{1}{0}{1}{2}(\infty)] - [\abcd{1}{0}{1}{2}(0)] \\
&= [\infty] - [1] + [1] - [0] = [\infty] - [0].

Finally, for K = \abcd{2}{0}{0}{1}\SL_2(\Z) we also have [\abcd{2}{0}{0}{1}(\infty)] - [\abcd{2}{0}{0}{1}(0)]  = [\infty]-[0], as required. Thus by (5) we can compute T_2 on any Manin symbol, by summing over the action of the four matrices \abcd{2}{0}{0}{1}, \abcd{1}{0}{0}{2}, \abcd{2}{1}{0}{1}, \abcd{1}{0}{1}{2}.

Proposition 1.29

The element

\sum_{\substack{a>b\geq 0\\d>c\geq 0\\ad-bc=n}}
\left[\mtwo{a}{b}{c}{d}\right] \in \Z[\Mat_2(\Z)_n]

satisfies condition C_n.

Merel’s two-page proof of Proposition 1.29 is fairly elementary.
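
For small n the matrices of Proposition 1.29 can simply be enumerated by brute force (plain Python sketch; the function name is ours). For n=2 this returns exactly the four matrices used above to verify condition C_2.

def merel_matrices(n):
    # All integer matrices [[a,b],[c,d]] with a > b >= 0, d > c >= 0, ad - bc = n.
    mats = []
    for a in range(1, n + 1):
        for b in range(a):              # a > b >= 0
            for d in range(1, n + 1):
                for c in range(d):      # d > c >= 0
                    if a * d - b * c == n:
                        mats.append([[a, b], [c, d]])
    return mats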

Remark 1.30

In [Cre97a, Section 2.4], Cremona discusses the work of Merel and Mazur on Heilbronn matrices in the special cases \Gamma=\Gamma_0(N) and weight 2. He gives a simple proof that the action of T_p on Manin symbols can be computed by summing the action of some set R_p of matrices of determinant p. He then describes the set R_p and gives an efficient continued fractions algorithm for computing it (but he does not prove that his R_p satisfy Merel’s hypothesis).

Remarks on Complexity

Merel gives another family \mathcal{S}_n of matrices that satisfy condition C_n, and he proves that as n\to \infty,

\#\mathcal{S}_n \sim \frac{12 \log(2)}{\pi^2} \cdot
\sigma_1(n)\log(n),

where \sigma_1(n) is the sum of the divisors of n. Thus for a fixed space \sM_k(\Gamma) of modular symbols, one can compute T_n using O(\sigma_1(n)\log(n)) arithmetic operations. Note that we have fixed \sM_k(\Gamma), so we ignore the linear algebra involved in computation of a presentation; also, adding elements takes a bounded number of field operations when the space is fixed. Thus, using Manin symbols the complexity of computing T_p, for p prime, is O((p+1)\log(p)) field operations, which is exponential in the number of digits of p.

Basmaji’s Trick

There is a trick of Basmaji (see [Bas96]) for computing a matrix of T_n on \sM_k(\Gamma), when n is very large, and it is more efficient than one might naively expect. Basmaji’s trick does not improve the big-oh complexity for a fixed space, but it does improve the complexity by a constant factor of the dimension of \sM_k(\Gamma;\Q). Suppose we are interested in computing the matrix for T_n for some massive integer n and that \sM_k(\Gamma;\Q) has large dimension. The trick is as follows. Choose a list

x_1 =[P_1,g_1],\ldots,x_r = [P_r,g_r] \in V = \sM_k(\Gamma;\Q)

of Manin symbols such that the map \Psi:\T \to V^r given by

t \mapsto (t x_1, \ldots, t x_r)

is injective. In practice, it is often possible to do this with r “very small”. Also, we emphasize that V^r is a \Q-vector space of dimension r\cdot \dim(V).

Next find Hecke operators T_i, with i small, whose images form a basis for the image of \Psi. Now with the above data precomputed, which only required working with Hecke operators T_i for small i, we are ready to compute T_n with n huge. Compute y_i = T_n(x_i), for each i=1,\ldots, r, which we can do using Heilbronn matrices since each x_i=[P_i,g_i] is a Manin symbol. We thus obtain \Psi(T_n) \in V^r. Since we have precomputed Hecke operators T_j such that the \Psi(T_j) span the image of \Psi, we can find a_j such that \sum a_j \Psi(T_j) = \Psi(T_n). Then since \Psi is injective, we have T_n = \sum a_j T_j, which gives the full matrix of T_n on \sM_k(\Gamma;\Q).

Cuspidal Modular Symbols

Let \sB be the free abelian group on symbols \{\alpha\}, for \alpha\in\P^1(\Q), and set

\sB_k = \Z[X,Y]_{k-2}\tensor\sB.

Define a left action of \SL_2(\Z) on \sB_k by

g  (P \{\alpha\}) = (gP) \{g(\alpha)\},

for g\in \SL_2(\Z). For any finite index subgroup \Gamma\subset \SL_2(\Z), let \sB_k(\Gamma) be the quotient of \sB_k by the relations x- gx for all g\in \Gamma and by any torsion. Thus \sB_k(\Gamma) is a torsion-free abelian group.

The boundary map is the map

b:\sM_k(\Gamma) \to \sB_k(\Gamma)

given by extending the map

b(P\{\alpha,\beta\}) = P\{\beta\} - P\{\alpha\}

linearly. The space \sS_k(\Gamma) of cuspidal modular symbols is the kernel

\sS_k(\Gamma) = \ker(\sM_k(\Gamma) \to \sB_k(\Gamma)),

so we have an exact sequence

0 \to \sS_k(\Gamma) \to \sM_k(\Gamma) \to \sB_k(\Gamma).

One can prove that when k>2, this sequence is exact on the right.
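
In Sage the cuspidal subspace is obtained directly from the ambient space (standard ModularSymbols methods); for instance, since \dim S_2(\Gamma_0(11))=1, the cuspidal modular symbols at level 11 have dimension 2:

sage: M = ModularSymbols(Gamma0(11), 2); M.dimension()
3
sage: M.cuspidal_subspace().dimension()
2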

Next we give a presentation of \sB_k(\Gamma) in terms of “boundary Manin symbols”.

Boundary Manin Symbols

We give an explicit description of the boundary map in terms of Manin symbols for \Gamma=\Gamma_1(N), then describe an efficient way to compute the boundary map.

Let \cR be the equivalence relation on \Gamma\backslash\Q^2 given by

[\Gamma\smallvtwo{\lambda u}{\lambda v}] \sim
\sign(\lambda)^k[\Gamma\smallvtwo{u}{v}],

for any \lambda\in\Q^*. Denote by B_k(\Gamma) the finite-dimensional \Q-vector space with basis the equivalence classes (\Gamma\backslash\Q^2)/\cR. The following two propositions are proved in [Mer94].

Proposition 1.31

The map

\mu:\sB_k(\Gamma)\ra B_k(\Gamma),
\qquad P\left\{\frac{u}{v}\right\}\mapsto
P(u,v)\left[\Gamma\vtwo{u}{v}\right]

is well defined and injective. Here u and v are assumed coprime.

Thus the kernel of the boundary map \delta = b:\sM_k(\Gamma)\ra \sB_k(\Gamma), i.e., the space \sS_k(\Gamma), equals the kernel of \mu\circ \delta.

Proposition 1.32

Let P\in \Z[X,Y]_{k-2} and g=\abcd{a}{b}{c}{d}\in\sltwoz. We have

\mu\circ\delta([P,(c,d)])
= P(1,0)[\Gamma\smallvtwo{a}{c}]
-P(0,1)[\Gamma\smallvtwo{b}{d}].

We next describe how to explicitly compute \mu\circ \delta:\sM_k(N,\eps)\ra B_k(N,\eps) by generalizing the algorithm of [Cre97a, Section 2.2]. To compute the image of [P,(c,d)], with g=\abcd{a}{b}{c}{d}\in\sltwoz, we must compute the class of [\smallvtwo{a}{c}] and of [\smallvtwo{b}{d}]. Instead of finding a canonical form for cusps, we use a quick test for equivalence modulo scalars. In the following algorithm, by the i^{th} symbol we mean the i^{th} basis vector for a basis of B_k(N,\eps). This basis is constructed as the algorithm is called successively. We first give the algorithm, and then prove the facts used by the algorithm in testing equivalence.

Algorithm 1.33

Given a boundary Manin symbol [\smallvtwo{u}{v}], this algorithm outputs an integer i and a scalar \alp such that [\smallvtwo{u}{v}] is equivalent to \alp times the i^{th} symbol found so far. (We call this algorithm repeatedly and maintain a running list of cusps seen so far.)

  1. Use Proposition 3.21 to check whether or not [\smallvtwo{u}{v}] is equivalent, modulo scalars, to any cusp found. If so, return the representative, its index, and the scalar. If not, record \smallvtwo{u}{v} in the representative list.
  2. Using Proposition 1.37, check whether or not [\smallvtwo{u}{v}] is forced to equal 0 by the relations. If it does not equal 0, return its position in the list and the scalar 1. If it equals 0, return the scalar 0 and the position 1; keep \smallvtwo{u}{v} in the list, and record that it is equivalent to 0.

The case considered in Cremona’s book [Cre97a] only involves the trivial character, so no cusp classes are forced to vanish. Cremona gives the following two criteria for equivalence.

Proposition 1.34

Consider \smallvtwo{u_i}{v_i}, i=1,2, with u_i,v_i integers such that \gcd(u_i,v_i)=1 for each i.

  1. There exists g\in\Gamma_0(N) such that g\smallvtwo{u_1}{v_1}=\smallvtwo{u_2}{v_2} if and only if

    s_1 v_2 \con s_2 v_1 \pmod{\gcd(v_1 v_2,N)},\,
\text{ where $s_j$ satisfies $u_j s_j\con 1\!\!\!\pmod{v_j}$}.

  2. There exists g\in\Gamma_1(N) such that g\smallvtwo{u_1}{v_1}=\smallvtwo{u_2}{v_2} if and only if

    v_2 \con v_1 \pmod{N}\text{ and } u_2 \con u_1
\pmod{\gcd(v_1,N)}.

Proof

The first statement is [Cre97a, Prop. 2.2.3], and the second is [Cre92, Lem. 3.2].

Algorithm 1.35

Suppose \smallvtwo{u_1}{v_1} and \smallvtwo{u_2}{v_2} are equivalent modulo \Gamma_0(N). This algorithm computes a matrix g\in\Gamma_0(N) such that g\smallvtwo{u_1}{v_1}=\smallvtwo{u_2}{v_2}.

  1. Let s_1, s_2, r_1, r_2 be solutions to s_1 u_1 -r_1 v_1 =1 and s_2 u_2 -r_2 v_2 =1.
  2. Find integers x_0 and y_0 such that x_0v_1v_2+y_0N=\gcd(v_1v_2,N).
  3. Let x=-x_0(s_1v_2-s_2v_1)/(v_1v_2,N), and set s_1' = s_1 + xv_1 and r_1' = r_1 + xu_1 (so that \abcd{u_1}{r_1'}{v_1}{s_1'} still has determinant 1).
  4. Output g=\abcd{u_2}{r_2}{v_2}{s_2}
\cdot \abcd{u_1}{r_1'}{v_1}{s_1'}^{-1}, which sends \smallvtwo{u_1}{v_1} to \smallvtwo{u_2}{v_2}.

Proof

See the proof of [Cre97a, Prop. 8.13].
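
Here is a plain Python sketch of Algorithm 1.35 (the helper names are ours; it assumes \gcd(u_i,v_i)=1 and that the two cusps really are \Gamma_0(N)-equivalent, and it carries along the adjusted entry r_1' from step 3):

def xgcd(a, b):
    # Extended gcd: returns (g, x, y) with a*x + b*y = g = gcd(a, b) >= 0.
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    if old_r < 0:
        old_r, old_s, old_t = -old_r, -old_s, -old_t
    return old_r, old_s, old_t

def gamma0_cusp_matrix(u1, v1, u2, v2, N):
    # Return g = [[a,b],[c,d]] in Gamma_0(N) with g(u1/v1) = u2/v2.
    _, s1, y1 = xgcd(u1, v1); r1 = -y1        # s1*u1 - r1*v1 = 1
    _, s2, y2 = xgcd(u2, v2); r2 = -y2        # s2*u2 - r2*v2 = 1
    g0, x0, _ = xgcd(v1 * v2, N)              # x0*v1*v2 + y0*N = gcd(v1*v2, N)
    x = -x0 * (s1 * v2 - s2 * v1) // g0       # exact division by Proposition 1.34
    s1p, r1p = s1 + x * v1, r1 + x * u1       # keep determinant 1
    # g = [[u2, r2], [v2, s2]] * [[u1, r1p], [v1, s1p]]^(-1)
    a = u2 * s1p - r2 * v1
    b = -u2 * r1p + r2 * u1
    c = v2 * s1p - s2 * v1
    d = -v2 * r1p + s2 * u1
    return [[a, b], [c, d]]

# Example: gamma0_cusp_matrix(1, 2, 3, 2, 4) == [[1, 1], [0, 1]],
# which lies in Gamma_0(4) and sends 1/2 to 3/2.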

The \eps relations can make the situation more complicated, since it is possible that \eps(\alp)\neq \eps(\beta) but

\eps(\alp)\left[\vtwo{u}{v}\right] =\left[\gamma_\alp \vtwo{u}{v}\right]=
\left[\gamma_\beta \vtwo{u}{v}\right]=\eps(\beta)\left[\vtwo{u}{v}\right].

One way out of this difficulty is to construct the cusp classes for \Gamma_1(N), and then quotient out by the additional \eps relations using Gaussian elimination. This is far too inefficient to be useful in practice because the number of \Gamma_1(N) cusp classes can be unreasonably large. Instead, we give a quick test to determine whether or not a cusp vanishes modulo the \eps-relations.

Lemma 1.36

Suppose \alp and \alp' are integers such that \gcd(\alp,\alp',N)=1. Then there exist integers \beta and \beta', congruent to \alp and \alp' modulo N, respectively, such that \gcd(\beta,\beta')=1.

Proof

By Exercise 1.12 the map \SL_2(\Z)\ra\SL_2(\Z/N\Z) is surjective. By the Euclidean algorithm, there exist integers x, y and z such that x\alp + y\alp' + zN = 1. Consider the matrix \abcd{y}{-x}{\alp}{\hfill\alp'}\in \SL_2(\Z/N\Z) and take \beta, \beta' to be the bottom row of a lift of this matrix to \SL_2(\Z).

Proposition 1.37

Let N be a positive integer and \eps a Dirichlet character of modulus N. Suppose \smallvtwo{u}{v} is a cusp with u and v coprime. Then \smallvtwo{u}{v} vanishes modulo the relations

\left[\gamma\smallvtwo{u}{v}\right]=
\eps(\gamma)\left[\smallvtwo{u}{v}\right],\qquad
\text{all $\gamma\in\Gamma_0(N)$},

if and only if there exists \alp\in(\Z/N\Z)^*, with \eps(\alp)\neq 1, such that

v &\con \alp v \pmod{N},\\
u &\con \alp u \pmod{\gcd(v,N)}.

Proof

First suppose such an \alp exists. By Lemma 1.36 there exists \beta, \beta'\in\Z lifting \alp,\alp^{-1} such that \gcd(\beta,\beta')=1. The cusp \smallvtwo{\beta u}{\beta' v} has coprime coordinates so, by Proposition 1.34 and our congruence conditions on \alp, the cusps \smallvtwo{\beta{}u}{\beta'{}v} and \smallvtwo{u}{v} are equivalent by an element of \Gamma_1(N). This implies that \left[\smallvtwo{\beta{}u}{\beta'{}v}\right]
=\left[\smallvtwo{u}{v}\right]. Since \left[\smallvtwo{\beta{}u}{\beta'{}v}\right]
= \eps(\alp)\left[\smallvtwo{u}{v}\right] and \eps(\alp)\neq 1, we have \left[\smallvtwo{u}{v}\right]=0.

Conversely, suppose \left[\smallvtwo{u}{v}\right]=0. Because all relations are two-term relations and the \Gamma_1(N)-relations identify \Gamma_1(N)-orbits, there must exist \alp and \beta with

\left[\gamma_\alp \vtwo{u}{v}\right]
=\left[\gamma_\beta \vtwo{u}{v}\right]
\qquad\text{ and }\eps(\alp)\ne \eps(\beta).

Indeed, if this did not occur, then we could mod out by the \eps relations by writing each \left[\gamma_\alp \smallvtwo{u}{v} \right] in terms of \left[\smallvtwo{u}{v}\right], and there would be no further relations left to kill \left[\smallvtwo{u}{v}\right]. Next observe that

\left[\gamma_{\beta^{-1}\alp}
\vtwo{u}{v}\right]
&= \left[\gamma_{\beta^{-1}}\gamma_\alp
\vtwo{u}{v}\right]\\
&= \eps(\beta^{-1})\left[\gamma_\alp
\vtwo{u}{v}\right]
= \eps(\beta^{-1})\left[\gamma_\beta
\vtwo{u}{v}\right]
= \left[\vtwo{u}{v}\right].

Applying Proposition 1.34 and noting that \eps(\beta^{-1}\alp)\neq 1 shows that \beta^{-1}\alp satisfies the properties of the \alp in the statement of the proposition.

We enumerate the possible \alp appearing in Proposition 1.37 as follows. Let g=(v,N) and list the \alp=\frac{N}{g}\cdot{}a+1, for a=0,\ldots,g-1, such that \eps(\alp)\neq 0.

Pairing Modular Symbols and Modular Forms

In this section we define a pairing between modular symbols and modular forms that the Hecke operators respect. We also define an involution on modular symbols and study its relationship with the pairing. This pairing is crucial in much that follows, because it gives rise to period maps from modular symbols to certain complex vector spaces.

Fix an integer weight k\geq 2 and a finite index subgroup \Gamma of \SL_2(\Z). Let M_k(\Gamma) denote the space of holomorphic modular forms of weight k for \Gamma, and let S_k(\Gamma) denote its cuspidal subspace. Following [Mer94, Section 1.5], let

(6)\Sbar_k(\Gamma) = \{\overline{f} : f \in S_k(\Gamma)\}

denote the space of antiholomorphic cusp forms. Here \overline{f} is the function on \h^* given by \overline{f}(z) = \overline{f(z)}.

Define a pairing

(7)(S_k(\Gamma) \oplus \Sbar_k(\Gamma))
\times \sM_k(\Gamma) \to \C

by letting

\langle (f_1, f_2), P\{\alpha,\beta\} \rangle
= \int_{\alpha}^{\beta} f_1(z) P(z,1)\dz +
\int_{\alpha}^{\beta} f_2(z) P(\zbar,1) \dzbar

and extending linearly. Here the integral is a complex path integral along a simple path from \alpha to \beta in \h (so, e.g., write z(t)=x(t) + iy(t), where (x(t),y(t)) traces out the path and consider two real integrals).

Proposition 1.38

The integration pairing is well defined, i.e., if we replace P\{\alpha,\beta\} by an equivalent modular symbol (equivalent modulo the left action of \Gamma), then the integral is the same.

Proof

This follows from the change of variables formulas for integration and the fact that f_1 \in S_k(\Gamma) and f_2 \in \Sbar_k(\Gamma). For example, if k=2, g\in \Gamma and f\in S_k(\Gamma), then

\langle f, g\{\alpha,\beta\}\rangle
&= \langle f, \{g(\alpha),g(\beta)\}\rangle \\
&= \int_{g(\alpha)}^{g(\beta)} f(z)dz \\
&= \int_{\alpha}^{\beta} f(g(z))d g(z) \\
&= \int_{\alpha}^{\beta} f(z)dz  =
\langle f, \{\alpha, \beta\}\rangle,

where f(g(z))d g(z) = f(z) dz because f is a weight 2 modular form. For the case of arbitrary weight k>2, see Exercise 1.14.

The integration pairing is very relevant to the study of special values of L-functions. The L-function of a cusp form f=\sum a_n q^n \in S_k(\Gamma_1(N)) is

(8)L(f,s) &= (2\pi)^{s}\Gamma(s)^{-1} \int_{0}^{\infty} f(i t)t^s \frac{dt}{t}\\
&=\sum_{n=1}^{\infty} \frac{a_n}{n^s} \qquad \text{ for }\Re(s)\gg 0.

The equality of the integral and the Dirichlet series follows by switching the order of summation and integration, which is justified using standard estimates on |a_n| (see, e.g., [Kna92, Section VIII.5]).

For each integer j with 1 \leq j \leq k-1, we have, setting s=j and making the change of variables t\mapsto -it in (8), that

(9)L(f,j) = \frac{(-2\pi i)^{j}}{(j-1)!}
\cdot \left\langle f,\,\,
X^{j-1}Y^{k-2-(j-1)} \{0,\infty\} \right\rangle.

The integers j as above are called critical integers. When f is an eigenform, they have deep conjectural significance (see [BK90, Sch90]). One can approximate L(f,j) to any desired precision by computing the above pairing explicitly using the method described in Chapter Computing Periods. Alternatively, [Dok04] contains methods for computing special values of very general L-functions, which can be used for approximating L(f,s) for arbitrary s, in addition to just the critical integers 1,\ldots,k-1.
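
As a quick illustration, Sage's elliptic-curve modular symbol for the level 11 newform returns the normalized critical value L(f,1)/\Omega_E; note that the precise normalization of this symbol has changed between Sage versions, so the value below reflects the current convention:

sage: E = EllipticCurve('11a1')
sage: E.modular_symbol()(0)
1/5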

Theorem 1.39

The pairing

\langle \cdot\, , \, \cdot \rangle: (S_k(\Gamma) \oplus \Sbar_k(\Gamma))
\times \sS_k(\Gamma,\C) \to \C

is a nondegenerate pairing of complex vector spaces.

Proof

This is [Sho80b, Thm. 0.2] and [Mer94, Section 1.5].

Corollary 1.40

We have

\dim_\C\sS_k(\Gamma,\C) = 2\dim_\C S_k(\Gamma).

The pairing is also compatible with Hecke operators. Before proving this, we define an action of Hecke operators on M_k(\Gamma_1(N)) and on \Sbar_k(\Gamma_1(N)). The definition is similar to the one we gave in Section Hecke Operators for modular forms of level 1. For a positive integer n, let R_n be a set of coset representatives for \Gamma_1(N)\backslash \Delta_n from Lemma 1.28. For any \gamma=\abcd{a}{b}{c}{d} \in \GL_2(\Q) and f\in M_k(\Gamma_1(N)) set

f^{[\gamma]_k} = \det(\gamma)^{k-1} (cz+d)^{-k} f(\gamma(z)).

Also, for f\in \Sbar_k(\Gamma_1(N)), set

f^{[\gamma]_k'} = \det(\gamma)^{k-1} (c\zbar+d)^{-k} f(\gamma(z)).

Then for f\in M_k(\Gamma_1(N)),

T_n(f) = \sum_{\gamma\in R_n} f^{[\gamma]_k}

and for f\in \Sbar_k(\Gamma_1(N)),

T_n(f) = \sum_{\gamma\in R_n} f^{[\gamma]_k'}.

This agrees with the definition from Section Hecke Operators when N=1.

Remark 1.41

If \Gamma is an arbitrary finite index subgroup of \SL_2(\Z), then we can define operators T_\Delta on M_k(\Gamma) for any \Delta with \Delta\Gamma=\Gamma\Delta = \Delta and \Gamma\backslash \Delta finite. For concreteness we do not do the general case here or in the theorem below, but the proof is exactly the same (see [Mer94, Section 1.5]).

Finally we prove the promised Hecke compatibility of the pairing. This proof should convince you that the definition of modular symbols is sensible, in that they are natural objects to integrate against modular forms.

Theorem 1.42

If

f=(f_1,f_2)\in S_k(\Gamma_1(N)) \oplus \Sbar_k(\Gamma_1(N))

and x \in \sM_k(\Gamma_1(N)), then for any n,

\langle T_n(f), x \rangle = \langle f, T_n(x) \rangle.

Proof

We follow [Mer94, Section 2.1] (but with more details) and will only prove the theorem when f=f_1\in S_k(\Gamma_1(N)), the proof in the general case being the same.

Let \alpha, \beta \in \P^1(\Q), P\in \Z[X,Y]_{k-2}, and for g=\abcd{a}{b}{c}{d}\in\GL_2(\Q), set

j(g,z) = cz+d.

Let n be any positive integer, and let R_n be a set of coset representatives for \Gamma_1(N)\backslash \Delta_n from Lemma 1.28.

We have

\langle T_n(f), P\{\alpha,\beta\}\rangle &=
\int_{\alpha}^{\beta} T_n(f) P(z,1) \dz\\
&= \sum_{\delta\in R} \int_{\alpha}^{\beta}
\det(\delta)^{k-1} f(\delta(z)) j(\delta,z)^{-k} P(z,1) \dz.

Now for each summand corresponding to the \delta\in R, make the change of variables u=\delta z. Thus we make \# R changes of variables. Also, we will use the notation

\tilde{g} = \mtwo{\hfill{}d}{-b}{-c}{\hfill{}a}
= \det(g)\cdot g^{-1}

for g\in\GL_2(\Q). We have

\langle T_n(f), P\{\alpha,\beta\}\rangle &=\\
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\sum_{\delta \in R}\int_{\delta(\alpha)}^{\delta(\beta)}
\det(\delta)^{k-1} f(u) j(\delta, \delta^{-1}(u))^{-k}
P(\delta^{-1}(u), 1) d(\delta^{-1}(u)).

We have \delta^{-1}(u) = \tilde{\delta}(u), since a linear fractional transformation is unchanged by a nonzero rescaling of a matrix that induces it. Thus by the quotient rule, using that \tilde{\delta} has determinant \det(\delta), we see that

d(\delta^{-1}(u)) = d(\tilde{\delta}(u))
= \frac{\det(\delta) du}{j(\tilde{\delta},u)^2}.

We next show that

(10)j(\delta, \delta^{-1}(u))^{-k} P(\delta^{-1}(u),1)
=
j(\tilde{\delta},u)^k \det(\delta)^{-k} P(\tilde{\delta}(u),1).

From the definitions, and again using that \delta^{-1}(u)=\tilde{\delta}(u), we see that

j(\delta, \delta^{-1}(u)) = \frac{\det(\delta)}{j(\tilde{\delta}, u)},

which proves that (10) holds. Thus

\left\langle T_n(f), P\{\alpha,\beta\}\right\rangle &= \\
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\sum_{\delta \in R}\int_{\delta(\alpha)}^{\delta(\beta)}
\det(\delta)^{k-1} f(u) j(\tilde{\delta},u)^{k}
\det(\delta)^{-k}
P(\tilde{\delta}(u), 1) \frac{\det(\delta) du}
{j(\tilde{\delta}, u)^2}.

Next we use that

(\delta P)(u,1) = j(\tilde{\delta}, u)^{k-2}
P(\tilde{\delta}(u), 1).

To see this, note that P(X,Y) = P(X/Y, 1)\cdot Y^{k-2}. Using this we see that

(\delta P)(X,Y) &= (P\circ \tilde{\delta})(X,Y)\\
&= P\left(\tilde{\delta}\left(\frac{X}{Y}\right),1\right)
\left(-c \cdot \frac{X}{Y} + a\right)^{k-2} \cdot Y^{k-2}.

Now substituting (u,1) for (X,Y), we see that

(\delta P)(u,1) =
P(\tilde{\delta}(u), 1) \cdot (-c u + a)^{k-2},

as required. Thus finally

\langle T_n(f), P\{\alpha,\beta\}\rangle
&= \sum_{\delta \in R}\int_{\delta(\alpha)}^{\delta(\beta)}
f(u) j(\tilde{\delta},u)^{k-2}
P(\tilde{\delta}(u), 1) du\\
&= \sum_{\delta \in R} \int_{\delta(\alpha)}^{\delta(\beta)}
f(u) \cdot ((\delta P)(u,1)) du\\
&= \langle f, \, T_n(P\{\alpha,\beta\})\rangle.

Suppose that \Gamma is a finite index subgroup of \SL_2(\Z) such that if \eta = \abcd{-1}{0}{\hfill 0}{1}, then

\eta \Gamma \eta = \Gamma.

For example, \Gamma=\Gamma_1(N) satisfies this condition. There is an involution \iota^* on \sM_k(\Gamma) given by

(11)\iota^*(P(X,Y)\{\alpha,\beta\})
= -P(X,-Y)\{-\alpha,-\beta\},

which we call the star involution. On Manin symbols, \iota^* is

(12)\iota^*[P,(u,v)] = -[P(-X,Y), (-u,v)].

Let \sS_k(\Gamma)^+ be the +1 eigenspace for \iota^* on \sS_k(\Gamma), and let \sS_k(\Gamma)^- be the -1 eigenspace. There is also a map \iota on modular forms, which is adjoint to \iota^*.

Remark 1.43

Notice the minus sign in front of -P(X,-Y)\{-\alpha,-\beta\} in (11). This sign is missing in [Cre97a], which is a potential source of confusion (this is not a mistake, but a different choice of convention).

We now state the final result about the pairing, which explains how modular symbols and modular forms are related.

Theorem 1.44

The integration pairing \langle \cdot\, , \, \cdot \rangle induces nondegenerate Hecke compatible bilinear pairings

\sS_k(\Gamma)^+ \times S_k(\Gamma) \to \C
\qquad \text{and}\qquad
\sS_k(\Gamma)^- \times \Sbar_k(\Gamma) \to \C.

Remark 1.45

We make some remarks about computing the boundary map of Section Boundary Manin Symbols when working in the \pm 1 quotient. Let s be a sign, either +1 or -1. To compute \sS_k(N,\eps)_s, it is necessary to replace B_k(N,\eps) by its quotient modulo the additional relations \left[
\smallvtwo{-u}{\hfill v}\right] = s \left[\smallvtwo{u}{v}\right] for all cusps \smallvtwo{u}{v}. Algorithm 1.33 can be modified to deal with this situation as follows. Given a cusp x=\smallvtwo{u}{v}, proceed as in Algorithm 1.33 and check if either \smallvtwo{u}{v} or \smallvtwo{-u}{\hfill v} is equivalent (modulo scalars) to any cusp seen so far. If not, use the following trick to determine whether the \eps and s-relations kill the class of \smallvtwo{u}{v}: use the unmodified Algorithm 1.33 to compute the scalars \alp_1, \alp_2 and indices i_1, i_2 associated to \smallvtwo{u}{v} and \smallvtwo{-u}{\hfill v}, respectively. The s-relation kills the class of \smallvtwo{u}{v} if and only if i_1=i_2 but \alp_1\neq
s\alp_2.

Degeneracy Maps

In this section, we describe natural maps between spaces of modular symbols with character of different levels. We consider spaces with character, since they are so important in applications.

Fix a positive integer N and a Dirichlet character \eps : (\Z/N\Z)^*\ra \C^*. Let M be a positive divisor of N that is divisible by the conductor of \eps, in the sense that \eps factors through (\Z/M\Z)^* via the natural map (\Z/N\Z)^*\ra (\Z/M\Z)^* composed with some uniquely defined character \eps':(\Z/M\Z)^*\ra\C^*. For any positive divisor t of N/M, let T=\abcd{1}{0}{0}{t} and fix a choice D_t=\{T\gamma_i :
i=1,\ldots, n\} of coset representatives for \Gamma_0(N)\backslash
T\Gamma_0(M).

Remark 1.46

Note that [Mer94, Section 2.6] contains a typo: the quotient "\Gamma_1(N)\backslash\Gamma_1(M)T" should be replaced by "\Gamma_1(N)\backslash T\Gamma_1(M)".

Proposition 1.47

For each divisor t of N/M there are well-defined linear maps

\alp_t : \sM_k(N,\eps) \ra \sM_k(M,\eps'),&\qquad
\alp_t(x) = (tT^{-1})x = \mtwo{t}{0}{0}{1} x,\\
\beta_t : \sM_k(M,\eps') \ra \sM_k(N,\eps),&\qquad
\beta_t(x) = \sum_{T\gamma_i\in D_t} \eps'(\gamma_i)^{-1}T\gamma_i{} x.

Furthermore, \alp_t\circ \beta_t is multiplication by t^{k-2}\cdot [\Gamma_0(M) : \Gamma_0(N)].

Proof

To show that \alp_t is well defined, we must show that for each x\in\sM_k(N,\eps) and \gamma=\abcd{a}{b}{c}{d}\in\Gamma_0(N), we have

\alp_t(\gamma x  -\eps(\gamma) x)=0\in\sM_k(M,\eps').

We have

\alp_t(\gamma{}x) = \mtwo{t}{0}{0}{1}\gamma{}x
= \mtwo{a}{tb}{c/t}{d}\mtwo{t}{0}{0}{1} x
= \eps'(d)\mtwo{t}{0}{0}{1} x,

since \mtwo{a}{tb}{c/t}{d}\in\Gamma_0(M) acts on \sM_k(M,\eps') through \eps'(d). Thus

\alp_t(\gamma x  -\eps(\gamma) x)
= \eps'(d)\alp_t(x) - \eps(\gamma)\alp_t(x) = 0,

because \eps(\gamma)=\eps(d)=\eps'(d).

We next verify that \beta_t is well defined. Suppose that x\in\sM_k(M,\eps') and \gamma\in\Gamma_0(M); then \eps'(\gamma)^{-1}\gamma{}x = x, so

\beta_t(x)
&= \sum_{T\gamma_i\in D_t}
\eps'(\gamma_i)^{-1}T\gamma_i{}\eps'(\gamma)^{-1}\gamma{} x\\
&= \sum_{T\gamma_i\gamma\in D_t}
\eps'(\gamma_i\gamma)^{-1}T\gamma_i{}\gamma{} x.

This computation shows that the definition of \beta_t does not depend on the choice D_t of coset representatives. To finish the proof that \beta_t is well defined, we must show that, for \gamma\in\Gamma_0(M), we have \beta_t(\gamma{}x) = \eps'(\gamma)\beta_t(x), so that \beta_t respects the relations that define \sM_k(M,\eps'). Using that \beta_t does not depend on the choice of coset representative, we find that for \gamma\in\Gamma_0(M),

\beta_t(\gamma{}x)
&= \sum_{T\gamma_i\in D_t} \eps'(\gamma_i)^{-1}T\gamma_i{} \gamma{} x\\
&= \sum_{T\gamma_i\gamma^{-1}\in D_t}
\eps'(\gamma_i\gamma^{-1})^{-1}T\gamma_i{}\gamma{}^{-1} \gamma{} x\\
&= \eps'(\gamma)\beta_t(x).\\

To compute \alp_t\circ\beta_t, we use that \#D_t = [\Gamma_0(M) : \Gamma_0(N)]:

\alp_t(\beta_t(x)) &=
\alp_t \left(\sum_{T\gamma_i}
\eps'(\gamma_i)^{-1}T\gamma_i x\right)\\
&= \sum_{T\gamma_i}
\eps'(\gamma_i)^{-1}(tT^{-1})T\gamma_i x\\
&= t^{k-2}\sum_{T\gamma_i}
\eps'(\gamma_i)^{-1}\gamma_i x\\
&= t^{k-2}\sum_{T\gamma_i} x \\
&= t^{k-2} \cdot [\Gamma_0(M) : \Gamma_0(N)] \cdot x.

The scalar factor of t^{k-2} appears instead of t, because t is acting on x as an element of \GL_2(\Q) and not as an element of \Q.
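
We can check the last statement of the proposition in Sage using the degeneracy_map method of modular symbols spaces (morphism matrices in Sage act on row vectors, so the composition \alp_1\circ\beta_1 corresponds to the matrix product below; here [\Gamma_0(2):\Gamma_0(6)]=4):

sage: M2 = ModularSymbols(Gamma0(2), 2)
sage: M6 = ModularSymbols(Gamma0(6), 2)
sage: beta = M2.degeneracy_map(6, 1)    # beta_1 : level 2 --> level 6
sage: alpha = M6.degeneracy_map(2, 1)   # alpha_1 : level 6 --> level 2
sage: beta.matrix() * alpha.matrix()
[4]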

Definition 1.48

The space \sM_k(N,\eps)_{\new} of new modular symbols is the intersection of the kernels of the \alp_t as t runs through all positive divisors of N/M and M runs through positive divisors of N strictly less than N and divisible by the conductor of \eps. The subspace \sM_k(N,\eps)_{\old} of old modular symbols is the subspace generated by the images of the \beta_t where t runs through all positive divisors of N/M and M runs through positive divisors of N strictly less than N and divisible by the conductor of \eps. The new and old subspaces of cuspidal modular symbols are the intersections of the above spaces with \sS_k(N,\eps).

Example 1.49

The new and old subspaces need not be disjoint, as the following example illustrates! (This contradicts [Mer94, pg. 80].) Consider, for example, the case N=6, k=2, and trivial character. The spaces \sM_2(\Gamma_0(2)) and \sM_2(\Gamma_0(3)) are each of dimension 1, and each is generated by the modular symbol \{\infty,0\}. The space \sM_2(\Gamma_0(6)) is of dimension 3 and is generated by the three modular symbols \{\infty, 0\}, \{-1/4, 0\}, and \{-1/2, -1/3\}. The space generated by the two images of \sM_2(\Gamma_0(2)) under the two degeneracy maps has dimension 2, and likewise for \sM_2(\Gamma_0(3)). Together these images generate \sM_2(\Gamma_0(6)), so \sM_2(\Gamma_0(6)) is equal to its old subspace. However, the new subspace is nontrivial because the two degeneracy maps \sM_2(\Gamma_0(6)) \ra
\sM_2(\Gamma_0(2)) are equal, as are the two degeneracy maps

\sM_2(\Gamma_0(6)) \ra \sM_2(\Gamma_0(3)).

In particular, the intersection of the kernels of the degeneracy maps has dimension at least 1 (in fact, it equals 1). We verify some of the above claims using Sage.

sage: M = ModularSymbols(Gamma0(6)); M
Modular Symbols space of dimension 3 for Gamma_0(6)
of weight 2 with sign 0 over Rational Field
sage: M.new_subspace()
Modular Symbols subspace of dimension 1 of Modular
Symbols space of dimension 3 for Gamma_0(6) of weight
2 with sign 0 over Rational Field
sage: M.old_subspace()
Modular Symbols subspace of dimension 3 of Modular
Symbols space of dimension 3 for Gamma_0(6) of weight
2 with sign 0 over Rational Field

Explicitly Computing \sM_k(\Gamma_0(N))

In this section we explicitly compute \sM_k(\Gamma_0(N)) for various k and N. We represent Manin symbols for \Gamma_0(N) as triples of integers (i, u,v), where (u,v)\in\P^1(\Z/N\Z), and (i, u,v) corresponds to [X^i Y^{k-2-i}, (u,v)] in the usual notation. Also, recall from Proposition 3.10 that (u,v) corresponds to the right coset of \Gamma_0(N) that contains a matrix \abcd{a}{b}{c}{d} with (u,v)\con (c,d) as elements of \P^1(\Z/N\Z), i.e., up to rescaling by an element of (\Z/N\Z)^*.

Computing \P^1(\Z/N\Z)

In this section we give an algorithm to compute a canonical representative for each element of \P^1(\Z/N\Z). This algorithm is extremely important because modular symbols implementations use it a huge number of times. A more naive approach would be to store all pairs (u,v)\in (\Z/N\Z)^2 and a fixed reduced representative, but this wastes a huge amount of memory. For example, if N=10^5, we would store an array of

2 \cdot 10^5 \cdot 10^5 = 20 \text{ billion integers}.

Another approach to enumerating \P^1(\Z/N\Z) is described at the end of [Cre97a, Section 2.2]. It uses the fact that it is easy to test whether two pairs (u_0,v_0), (u_1,v_1) define the same element of \P^1(\Z/N\Z); they do if and only if we have equality of cross terms u_0 v_1 \con v_0 u_1\pmod{N} (see [Cre97a, Prop. 2.2.1]). So we consider the 0-based list of elements

(13)(1,0), (1,1), \ldots, (1,N-1), (0,1)

concatenated with the list of nonequivalent elements (d,a) for d\mid N and a=1,\ldots, N-1, checking each time we add a new element to our list (of (d,a)) whether we have already seen it.

Given a random pair (u,v), the problem is then to find the index in our list of the equivalent canonical representative in \P^1(\Z/N\Z). We use the following algorithm, which finds a canonical representative for each element of \P^1(\Z/N\Z). Given an arbitrary (u,v), we first find the canonical equivalent element (u',v'). If u'=1, then the index is v'. If u'\neq 1, we find the corresponding element in an explicit sorted list, e.g., using binary search.

In the following algorithm, a \pmod{N} denotes the residue of a modulo N that satisfies 0 \leq a < N. Note that we never create and store the list (13) itself in memory.

Algorithm 1.50

Given u and v and a positive integer N, this algorithm outputs a pair u_0, v_0 such that (u,v) \con (u_0,v_0) as elements of \P^1(\Z/N\Z) and s\in\Z such that (u,v) \con (su_0, sv_0) \pmod{N}. Moreover, the element (u_0,v_0) depends only on the class of (u,v) in \P^1(\Z/N\Z), i.e., for any s with \gcd(N,s)=1 the input (su,sv) also outputs (u_0,v_0). If (u,v) is not in \P^1(\Z/N\Z), this algorithm outputs (0,0), 0.

  1. [Reduce] Reduce both u and v modulo N.
  2. [Easy (0,1) Case] If u=0, check that \gcd(v,N)=1. If so, return (0,1) and the scalar s=v; otherwise output (0,0), 0.
  3. [GCD] Compute g=\gcd(u,N) and s,t\in\Z such that g=su+tN.
  4. [Not in P^1?] We have \gcd(u,v,N)=\gcd(g,v), so if \gcd(g,v)>1, then (u,v)\not\in\P^1(\Z/N\Z), and we output (0,0), 0.
  5. [Pseudo-Inverse] Now g = su + tN, so we may think of s as “pseudo-inverse” of u \pmod{N}, in the sense that su is as close as possible to being 1 modulo N. Note that since g\mid u, changing s modulo N/g does not change su\pmod{N}. We can adjust s modulo N/g so it is coprime to N (by adding multiples of N/g to s). (This is because 1 = s u/g + tN/g, so s is a unit mod N/g, and the map (\Z/N\Z)^* \to (\Z/(N/g)\Z)^* is surjective, e.g., as we saw in the proof of Algorithm 4.28.)
  6. [Multiply by s] Multiply (u,v) by s, and replace (u,v) by the equivalent element (g,sv) of \P^1(\Z/N\Z).
  7. [Normalize] Compute the unique pair (g,v') equivalent to (g,v) that minimizes v', as follows:
    1. [Easy Case] If g=1, this pair is (1,v).
    2. [Enumerate and Find Best] Otherwise, note that if 1\neq t\in(\Z/N\Z)^* and tg \con g\pmod{N}, then (t-1)g \con 0\pmod{N}, so t-1 = kN/g for some k with 1\leq k \leq g-1. Then for t=1+kN/g coprime to N, we have (gt, vt) = (g, v+kvN/g). So we compute all pairs (g, v+kvN/g) and pick out the one that minimizes the least nonnegative residue of vt modulo N.
    3. [Invert s and Output] The s that we computed in the above steps multiplies the input (u,v) to give the output (u_0,v_0). Thus we invert it, since the scalar we output multiplies (u_0,v_0) to give (u,v).

Remark 1.51

In the above algorithm, there are many gcd’s with N so one should create a table of the gcd’s of 0,1,2,\ldots, N-1 with N.

Remark 1.52

Another approach is to instead use that

\P^1(\Z/N\Z)\isom \prod_{p\mid N} \P^1(\Z/p^{\nu_p}\Z),

where \nu_p = \ord_p(N), and that it is relatively easy to enumerate the elements of \P^1(\Z/p^n\Z) for a prime power p^n.

Algorithm 1.53

Given an integer N>1, this algorithm makes a sorted list of the distinct representatives (c,d) of \P^1(\Z/N\Z) with c\neq 0,1, as output by Algorithm 1.50.

  1. For each c=1,\ldots, N-1 with g = \gcd(c,N) > 1 do the following:
    1. Use Algorithm 1.50 to compute the canonical representative (u',v') equivalent to (c,1), and include it in the list.
    2. If g=c, for each d=2,\ldots, N-1 with \gcd(d,N)>1 and \gcd(c,d)=1, append the normalized representative of (c,d) to the list.
  2. Sort the list.
  3. Pass through the sorted list and delete any duplicates.
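
Sage's P1List class (in sage.modular.modsym.p1list) implements exactly this kind of normalized enumeration of \P^1(\Z/N\Z) with fast index lookup; for example:

sage: from sage.modular.modsym.p1list import P1List
sage: P1 = P1List(6)
sage: len(P1)
12
sage: P1.index(2, 1) == P1.index(4, 5)   # (4,5) = 5*(2,1) and 5 is a unit mod 6
True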

Explicit Examples

Explicit detailed examples are crucial when implementing modular symbols algorithms from scratch. This section contains a number of such examples.

Examples of Computation of \sM_k(\Gamma_0(N))

In this section, we compute \sM_k(\Gamma_0(N)) explicitly in a few cases.

Example 1.54

We compute V=\sM_4(\Gamma_0(1)). Because S_4(\Gamma_0(1))=0 and M_4(\Gamma_0(1))=\C E_4, we expect V to have dimension 1 and, for each integer n, the Hecke operator T_n to have eigenvalue \sigma_3(n), the sum of the cubes of the positive divisors of n.

The Manin symbols are

x_0 = (0, 0, 0), \quad x_1 = (1, 0, 0), \quad x_2 = (2, 0, 0).

The relation matrix is

\left(\begin{matrix}
{}1&{}0&{}1\\
{}0&{}0&{}0\\
\hline
{}2&{}-2&{}2\\
{}1&{}-1&{}1\\
{}2&{}-2&{}2
\end{matrix}\right),

where the first two rows correspond to the S-relations (the relations coming from \sigma) and the last three to the T-relations (coming from \tau). Note that we do not include all S-relations, since it is obvious that some are redundant, e.g., x+xS=0 and (xS) + (xS)S=xS + x=0 are the same relation, since S^2 acts trivially.

The echelon form of the relation matrix is

\left(\begin{matrix}
\hfill{}1&\hfill{}0&\hfill{}1\\
\hfill{}0&\hfill{}1&\hfill{}0\\
\end{matrix}\right),

where we have deleted the zero rows from the bottom. Thus we may replace the above complicated list of relations with the following simpler list of relations:

x_0 + x_2  &= 0,\\
x_1 &= 0

from which we immediately read off that the second generator x_1 is 0 and x_0=-x_2. Thus \sM_4(\Gamma_0(1)) has dimension 1, with basis the equivalence class of x_2 (or of x_0).

Next we compute the Hecke operator T_2 on \sM_4(\Gamma_0(1)). The Heilbronn matrices of determinant 2 from Proposition 1.29 are

h_0&=\left(\begin{matrix}
\hfill{}1&\hfill{}0\\
\hfill{}0&\hfill{}2
\end{matrix}\right),\quad{}\\
h_1&=\left(\begin{matrix}
\hfill{}1&\hfill{}0\\
\hfill{}1&\hfill{}2
\end{matrix}\right),\quad{}\\
h_2&=\left(\begin{matrix}
\hfill{}2&\hfill{}0\\
\hfill{}0&\hfill{}1
\end{matrix}\right),\quad{}\\
h_3&=\left(\begin{matrix}
\hfill{}2&\hfill{}1\\
\hfill{}0&\hfill{}1
\end{matrix}\right).

To compute T_2, we apply each of these matrices to x_2 and then reduce modulo the relations. We have

x_2\mtwo{1}{0}{0}{2} &= [X^2,(0,0)] \mtwo{1}{0}{0}{2} = x_2,\\
x_2 \mtwo{1}{0}{1}{2} &= [X^2,(0,0)] = x_2,\\
x_2 \mtwo{2}{0}{0}{1} &= [(2X)^2,(0,0)] = 4x_2,\\
x_2 \mtwo{2}{1}{0}{1} &= [(2X+Y)^2,(0,0)] = x_0 + 4x_1 + 4x_2
\sim 3x_2.

Summing we see that T_2(x_2) \sim 9x_2 in \sM_4(\Gamma_0(1)). Notice that

9=1^3+2^3 = \sigma_3(2).

The Heilbronn matrices of determinant 3 from Proposition 1.29 are

h_0&=\left(\begin{matrix}
\hfill{}1&\hfill{}0\\
\hfill{}0&\hfill{}3
\end{matrix}\right),\quad{}
h_1=\left(\begin{matrix}
\hfill{}1&\hfill{}0\\
\hfill{}1&\hfill{}3
\end{matrix}\right),\quad{}\\
h_2&=\left(\begin{matrix}
\hfill{}1&\hfill{}0\\
\hfill{}2&\hfill{}3
\end{matrix}\right),\quad{}
h_3=\left(\begin{matrix}
\hfill{}2&\hfill{}1\\
\hfill{}1&\hfill{}2
\end{matrix}\right),\quad{}\\
h_4&=\left(\begin{matrix}
\hfill{}3&\hfill{}0\\
\hfill{}0&\hfill{}1
\end{matrix}\right),\quad{}
h_5=\left(\begin{matrix}
\hfill{}3&\hfill{}1\\
\hfill{}0&\hfill{}1
\end{matrix}\right),\quad{}\\
h_6&=\left(\begin{matrix}
\hfill{}3&\hfill{}2\\
\hfill{}0&\hfill{}1
\end{matrix}\right).

We have

x_2 \mtwo{1}{0}{0}{3} &= [X^2,(0,0)] \mtwo{1}{0}{0}{3} =
x_2,\\
x_2 \mtwo{1}{0}{1}{3} &= [X^2,(0,0)] = x_2,\\
x_2 \mtwo{1}{0}{2}{3} &= [X^2,(0,0)] = x_2,\\
x_2 \mtwo{2}{1}{1}{2} &= [(2X+Y)^2,(0,0)] = x_0 + 4x_1 + 4x_2
\sim 3x_2,\\
x_2 \mtwo{3}{0}{0}{1} &= [(3X)^2,(0,0)] = 9x_2,\\
x_2 \mtwo{3}{1}{0}{1} &= [(3X+Y)^2,(0,0)] = x_0 + 6x_1 + 9x_2 \sim 8x_2,\\
x_2 \mtwo{3}{2}{0}{1} &= [(3X+2Y)^2,(0,0)] = 4x_0 + 12x_1 + 9x_2 \sim 5x_2.\\

Summing we see that

T_3(x_2) \sim x_2 + x_2 + x_2 + 3x_2 + 9x_2 + 8x_2 + 5x_2
= 28 x_2.

Notice that

28=1^3+3^3 = \sigma_3(3).
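Both Hecke computations above can be reproduced with a few lines of SymPy, using the action convention of this example (a matrix with rows (a,b) and (c,d) sends the polynomial part P(X,Y) to P(aX+bY, cX+dY), and at level 1 the (c,d)-component of each Manin symbol is always (0,0)) and the relations x_1 = 0, x_0 = -x_2. This is only a sanity check of the arithmetic in this example, not an implementation of the general Hecke action:

```python
from sympy import symbols, expand, Poly

X, Y = symbols('X Y')

def hecke_sum(heilbronn, P):
    """Sum of P(a*X + b*Y, c*X + d*Y) over the given Heilbronn matrices (level 1)."""
    total = 0
    for a, b, c, d in heilbronn:
        total += P.subs({X: a*X + b*Y, Y: c*X + d*Y}, simultaneous=True)
    return expand(total)

def reduce_mod_relations(Q):
    """Write Q = q0*Y^2 + q1*X*Y + q2*X^2 as a multiple of x_2, using x_1 = 0 and x_0 = -x_2."""
    q = Poly(Q, X, Y)
    return q.coeff_monomial(X**2) - q.coeff_monomial(Y**2)

heilbronn2 = [(1, 0, 0, 2), (1, 0, 1, 2), (2, 0, 0, 1), (2, 1, 0, 1)]
heilbronn3 = [(1, 0, 0, 3), (1, 0, 1, 3), (1, 0, 2, 3), (2, 1, 1, 2),
              (3, 0, 0, 1), (3, 1, 0, 1), (3, 2, 0, 1)]

print(reduce_mod_relations(hecke_sum(heilbronn2, X**2)))   # 9  = sigma_3(2)
print(reduce_mod_relations(hecke_sum(heilbronn3, X**2)))   # 28 = sigma_3(3)
```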

Example 1.55

Next we compute \sM_2(\Gamma_0(11)) explicitly. The Manin symbol generators are

x_0= (0, 1),\quad\!\!\!\!
x_1= (1, 0),\quad\!\!\!\!
x_2= (1, 1),\quad\!\!\!\!
x_3= (1, 2),\quad\!\!\!\!
x_4= (1, 3),\quad\!\!\!\!
x_5= (1, 4),\quad\!\!

x_6= (1, 5),\quad\!\!\!\!
x_7= (1, 6),\quad\!\!\!\!
x_8= (1, 7),\quad\!\!\!\!
x_9= (1, 8),\quad\!\!\!\!
x_{10}= (1, 9),\quad\!\!\!\!
x_{11}= (1, 10).

The relation matrix is as follows, where the S-relations are above the line and the T-relations are below it:

\left(\begin{array}{cccccccccccc}
1&1&0&0&0&0&0&0&0&0&0&0\\
0&0&1&0&0&0&0&0&0&0&0&1\\
0&0&0&1&0&0&1&0&0&0&0&0\\
0&0&0&0&1&0&0&0&1&0&0&0\\
0&0&0&0&0&1&0&0&0&1&0&0\\
0&0&0&0&0&0&0&1&0&0&1&0\\
\hline
1&1&0&0&0&0&0&0&0&0&0&1\\
0&0&1&0&0&0&1&0&0&0&1&0\\
0&0&0&1&0&1&0&0&1&0&0&0\\
0&0&0&0&1&0&0&1&0&1&0&0
\end{array}\right).

In weight 2, two out of three T-relations are redundant, so we do not include them. The reduced row echelon form of the relation matrix is

\left(\begin{array}{cccccccccccc}
1&1&0&0&0&0&0&0&0&0&0&0\\
0&0&1&0&0&0&0&0&0&0&0&0\\
0&0&0&1&0&0&0&0&0&0&-1&0\\
0&0&0&0&1&0&0&0&0&1&-1&0\\
0&0&0&0&0&1&0&0&0&1&0&0\\
0&0&0&0&0&0&1&0&0&0&1&0\\
0&0&0&0&0&0&0&1&0&0&1&0\\
0&0&0&0&0&0&0&0&1&-1&1&0\\
0&0&0&0&0&0&0&0&0&0&0&1\\
0&0&0&0&0&0&0&0&0&0&0&0
\end{array}\right).

From the echelon form we see that every symbol is equivalent to a combination of x_1=(1,0), x_9=(1,8), and x_{10}=(1,9). (Notice that columns 1, 9, and 10 are the nonpivot columns, where we index columns starting at 0.)

To compute T_2, we apply each of the Heilbronn matrices of determinant 2 from Proposition 1.29 to x_1, then to x_9, and finally to x_{10}. The matrices are as in Example 1.54 above. We have

T_2(x_1) = 3(1,0) + (1,6) \sim 3x_1 - x_{10}.

Applying T_2 to x_9=(1,8), we get

T_2(x_9) =  (1,3)  + (1,4) + (1,5) + (1,10) \sim -2x_9.

Applying T_2 to x_{10}=(1,9), we get

T_2(x_{10})= (1,4)+(1,5) + (1,7) + (1,10) \sim -2x_{10}.

Thus the matrix of T_2 with respect to this basis is

T_2 =
\left(\begin{array}{ccc}
3&0&0\\
0&-2&0\\
-1&0&-2
\end{array}\right),

where we write the matrix as an operator on the left on vectors written in terms of x_1, x_9, and x_{10}. The matrix T_2 has characteristic polynomial

(x-3) (x+2)^2.

The (x-3) factor corresponds to the weight 2 Eisenstein series, and the x+2 factor corresponds to the elliptic curve E=X_0(11), which has

a_2 = -2 = 2+1 - \#E(\F_2).
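This last point count is easy to reproduce. The sketch below uses the Weierstrass model y^2 + y = x^3 - x^2 - 10x - 20 for X_0(11); this model is not given in the text, and any curve in the isogeny class gives the same a_2.

```python
def count_points_mod2():
    """Count the points of y^2 + y = x^3 - x^2 - 10x - 20 over F_2, including infinity."""
    count = 1                                  # the point at infinity
    for x in range(2):
        for y in range(2):
            if (y * y + y - (x**3 - x**2 - 10 * x - 20)) % 2 == 0:
                count += 1
    return count

print(2 + 1 - count_points_mod2())             # -2 = a_2
```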

Example 1.56

In this example, we compute \sM_6(\Gamma_0(3)), which illustrates both weight greater than 2 and level greater than 1. We have the following generating Manin symbols:

x_{0} &= [Y^4,(0,1)],\qquad x_{1} = [Y^4,(1,0)],\\
x_{2} &= [Y^4,(1,1)], \qquad x_{3} = [Y^4,(1,2)],\\
x_{4} &= [XY^3,(0,1)], \qquad x_{5} = [XY^3,(1,0)],\\
x_{6} &= [XY^3,(1,1)], \qquad
x_{7} = [XY^3,(1,2)],\\
x_{8} &= [X^2Y^2,(0,1)], \qquad
\!\!x_{9} = [X^2Y^2,(1,0)],\\
x_{10} &= [X^2Y^2,(1,1)], \qquad
\!\!\!\!x_{11} = [X^2Y^2,(1,2)],\\
x_{12} &= [X^3Y,(0,1)], \qquad
x_{13} = [X^3Y,(1,0)],\\
x_{14} &= [X^3Y,(1,1)], \qquad
x_{15} = [X^3Y,(1,2)],\\
x_{16} &= [X^4,(0,1)], \qquad
x_{17} = [X^4,(1,0)],\\
x_{18} &= [X^4,(1,1)], \qquad
x_{19} = [X^4,(1,2)].\\

The relation matrix of \sM_6(\Gamma_0(3)) is already very large. It is as follows, where the S-relations are above the line and the T-relations below it:

\left(\begin{array}{cccccccccccccccccccc}
1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0\\
0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0\\
0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1\\
0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0\\
0&0&0&0&1&0&0&0&0&0&0&0&0&-1&0&0&0&0&0&0\\
0&0&0&0&0&1&0&0&0&0&0&0&-1&0&0&0&0&0&0&0\\
0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&-1&0&0&0&0\\
0&0&0&0&0&0&0&1&0&0&0&0&0&0&-1&0&0&0&0&0\\
0&0&0&0&0&0&0&0&1&1&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&1&1&0&0&0&0&0&0&0&0\\\hline
1&0&0&1&0&0&0&-4&0&0&0&6&0&0&0&-4&0&1&0&1\\
1&1&0&0&-4&0&0&0&6&0&0&0&-4&0&0&0&1&0&0&1\\
0&0&2&0&0&0&-4&0&0&0&6&0&0&0&-4&0&0&0&2&0\\
0&1&0&1&0&-4&0&0&0&6&0&0&0&-4&0&0&1&1&0&0\\
0&0&0&1&1&0&0&-3&0&0&0&3&0&-1&0&-1&0&1&0&0\\
1&0&0&0&-3&1&0&0&3&0&0&0&-1&0&0&-1&0&0&0&1\\
0&0&1&0&0&0&-2&0&0&0&3&0&0&0&-2&0&0&0&1&0\\
0&1&0&0&0&-3&0&1&0&3&0&0&-1&-1&0&0&1&0&0&0\\
0&0&0&1&0&0&0&-2&1&1&0&1&0&-2&0&0&0&1&0&0\\
1&0&0&0&-2&0&0&0&1&1&0&1&0&0&0&-2&0&0&0&1\\
0&0&1&0&0&0&-2&0&0&0&3&0&0&0&-2&0&0&0&1&0\\
0&1&0&0&0&-2&0&0&1&1&0&1&-2&0&0&0&1&0&0&0\\
0&0&0&1&0&-1&0&-1&0&3&0&0&1&-3&0&0&0&1&0&0\\
1&0&0&0&-1&0&0&-1&0&0&0&3&0&1&0&-3&0&0&0&1\\
0&0&1&0&0&0&-2&0&0&0&3&0&0&0&-2&0&0&0&1&0\\
0&1&0&0&-1&-1&0&0&3&0&0&0&-3&0&0&1&1&0&0&0\\
0&1&0&1&0&-4&0&0&0&6&0&0&0&-4&0&0&1&1&0&0\\
1&0&0&1&0&0&0&-4&0&0&0&6&0&0&0&-4&0&1&0&1\\
0&0&2&0&0&0&-4&0&0&0&6&0&0&0&-4&0&0&0&2&0\\
1&1&0&0&-4&0&0&0&6&0&0&0&-4&0&0&0&1&0&0&1
\end{array}\right).

The reduced row echelon form of the relation matrix, with the zero rows removed, is

\hspace{-1em}\left(\begin{array}{cccccccccccccccccccc}
1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0\\
0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0\\
0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1\\
0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0\\
0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&3/16&-3/16\\
0&0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&-1/16&1/16\\
0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&1/2&-5/16&-3/16\\
0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&0&-1/2&3/16&5/16\\
0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&-1/6&1/12&1/12\\
0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&1/6&-1/12&-1/12\\
0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&1/4&-1/4\\
0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&-1/4&1/4\\
0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&-1/16&1/16\\
0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&3/16&-3/16\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&-1/2&3/16&5/16\\
0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&1/2&-5/16&-3/16\\
\end{array}\right).

Since these relations are equivalent to the original relations, we see how x_0, \ldots, x_{15} can be expressed in terms of x_{16}, x_{17}, x_{18}, and x_{19}. Thus \sM_6(\Gamma_0(3)) has dimension 4. For example,

x_{15} \sim -\frac{1}{2} x_{17} +\frac{5}{16}x_{18}
+ \frac{3}{16} x_{19}.

Notice that the number of relations is already quite large. It is perhaps surprising how complicated the presentation is already for \sM_6(\Gamma_0(3)). Because there are denominators in the relations, the above calculation is only a computation of \sM_6(\Gamma_0(3);\Q). Computing \sM_6(\Gamma_0(3);\Z) involves finding a \Z-basis for the kernel of the relation matrix (see Exercise 7.5).

As before, we find that with respect to the basis x_{16}, x_{17}, x_{18}, and x_{19}

T_2 =
\left(\begin{array}{cccc}
33&0&0&0\\
3&6&12&12\\
-3/2&27/2&15/2&27/2\\
-3/2&27/2&27/2&15/2
\end{array}\right).

Notice that there are denominators in the matrix for T_2 with respect to this basis. It is clear from the definition of T_2 acting on Manin symbols that T_2 preserves the \Z-module \sM_6(\Gamma_0(3)), so there is some basis for \sM_6(\Gamma_0(3)) such that T_2 is given by an integer matrix. Thus the characteristic polynomial f_2 of T_2 will have integer coefficients; indeed,

f_2 = (x-33)^2 \cdot (x+6)^2.
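Despite the denominators in the matrix of T_2 above, its characteristic polynomial is indeed integral, which one can confirm with a few lines of SymPy (a sanity check, not part of the computation):

```python
from sympy import Matrix, Rational, factor, symbols

x = symbols('x')
half = Rational(1, 2)
T2 = Matrix([
    [33,        0,         0,         0],
    [3,         6,        12,        12],
    [-3 * half, 27 * half, 15 * half, 27 * half],
    [-3 * half, 27 * half, 27 * half, 15 * half],
])
print(factor(T2.charpoly(x).as_expr()))   # (x - 33)**2 * (x + 6)**2
```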

Note the factor (x-33)^2, which comes from the two images of the Eisenstein series E_6 of level 1. The factor x+6 comes from the cusp form

g = q - 6q^2 + \cdots \in S_6(\Gamma_0(3)).

By computing more Hecke operators T_n, we can find more coefficients of g. For example, the characteristic polynomial of T_3 is (x-1)(x-243)(x-9)^2, and the matrix of T_5 is

T_5 = \left(\begin{array}{cccc}
3126&0&0&0\\
240&966&960&960\\
-120&1080&1086&1080\\
-120&1080&1080&1086
\end{array}\right),

which has characteristic polynomial

f_5 = (x-3126)^2 (x-6)^2.

The matrix of T_7 is

T_7 = \left(\begin{array}{cccc}
16808&0&0&0\\
1296&5144&5184&5184\\
-648&5832&5792&5832\\
-648&5832&5832&5792
\end{array}\right),

with characteristic polynomial

f_7 = (x -16808)^2 (x + 40)^2.

One can put this information together to deduce that

g = q - 6q^2 + 9q^3 + 4q^4 + 6q^5 - 54q^6 - 40q^7 +\cdots.
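The bookkeeping behind this expansion is just the usual multiplicativity of the coefficients of a normalized eigenform: a_{mn} = a_m a_n for \gcd(m,n)=1, and a_{p^2} = a_p^2 - p^{k-1} for primes p not dividing the level. These recurrences are standard facts about eigenforms rather than something computed in the text; a minimal sketch:

```python
k = 6
a = {1: 1, 2: -6, 3: 9, 5: 6, 7: -40}    # eigenvalues read off from f_2, T_3, f_5, f_7 above
a[4] = a[2] ** 2 - 2 ** (k - 1)          # a_4 = 36 - 32 = 4  (2 does not divide the level 3)
a[6] = a[2] * a[3]                       # a_6 = -54
print([a[n] for n in range(1, 8)])       # [1, -6, 9, 4, 6, -54, -40]
```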

Example 1.57

Consider \sM_2(\Gamma_0(43)), which has dimension 7. With respect to the symbols

x_1&=(1,0),\quad x_{32}=(1,31),\quad
x_{33} = (1,32),\\
x_{39} &= (1,38),\quad
x_{40} = (1,39), \quad
x_{41} = (1,40),\quad
x_{42} = (1,41),

the matrix of T_2 is

T_2=\left(\begin{array}{ccccccc}
3&0&0&0&0&0&0\\
0&-2&-1&-1&-1&0&0\\
0&1&1&0&0&-2&-1\\
0&0&1&-1&1&0&0\\
0&0&0&2&1&2&1\\
0&0&-1&-1&-1&-2&0\\
-1&0&0&1&1&1&-1
\end{array}\right),

which has characteristic polynomial

(x-3)(x+2)^2 (x^2-2)^2.

There is one Eisenstein series and there are three cusp forms with a_2 = -2 and a_2 = \pm \sqrt{2}.

Example 1.58

To compute \sM_2(\Gamma_0(2004);\Q), we first make a list of the

4032 = (2^2 + 2) \cdot (3+1) \cdot (167 + 1)

elements (a,b)\in\P^1(\Z/2004\Z) using Algorithm 1.50. The list looks like this:

(0, 1), (1, 0), (1, 1), (1, 2),
\ldots,   (668, 1), (668, 3), (668, 5), (1002, 1).

For each of the symbols x_i, we consider the S-relations and T-relations. Ignoring the redundant relations, we find 2016 S-relations and 1344 T-relations. It is simple to quotient out by the S-relations, e.g., by identifying x_i with -x_i S = -x_j for some j (or setting x_i=0 if x_i S = x_i). Once we have taken the quotient by the S-relations, we take the images of all 1344 T-relations modulo the S-relations and quotient out by them. Because S and T do not commute, it does not suffice to impose the T-relations x_i + x_i T + x_i T^2 = 0 only for the symbols x_i that form a basis after quotienting out by the S-relations; we must use all of them. The relation matrix has rank 3359, so \sM_2(\Gamma_0(2004);\Q) has dimension 4032 - 3359 = 673.

If we instead compute the quotient \sM_2(\Gamma_0(2004);\Q)^+ of \sM_2(\Gamma_0(2004);\Q) by the subspace of elements x-\eta^*(x), we include relations x_i+x_iI=0, where I=\abcd{-1}{0}{0}{1}. There are now 2016 S-relations, 2024 I-relations, and 1344 T-relations. Again, it is relatively easy to quotient out by the S-relations by identifying x_i and -x_i S. We then take the image of all 2024 I-relations modulo the S-relations, and again directly quotient out by the I-relations by identifying [x_i] with -[x_iI]=-[x_j] for some j, where by [x_i] we mean the class of x_i modulo the S-relations. Finally, we quotient out by the 1344 T-relations, which involves sparse Gauss elimination on a matrix with 1344 rows and at most three nonzero entries per row. The dimension of \sM_2(\Gamma_0(2004);\Q)^+ is 331.

Refined Algorithm for the Presentation

Algorithm 1.59

This is an algorithm to compute \sM_k(\Gamma_0(N);\Q) or \sM_k(\Gamma_0(N);\Q)^{\pm}, which only requires doing generic sparse linear algebra to deal with the three term T-relations.

  1. Let x_0,\ldots, x_n be a list of all Manin symbols.

  2. Quotient out by the two-term S-relations and, if the \pm quotient is desired, by the two-term \eta-relations. (Note that this is more subtle than simply “identifying symbols in pairs”, since chains of relations can force a generator to equal 0; see the sketch following this algorithm.) Let [x_i] denote the class of x_i after this quotienting process.

  3. Create a sparse matrix A with one column for each class [x_i] from the previous step, whose rows encode the relations

    [x_i] + [x_iT] + [x_iT^2] = 0.

    For example, there are about n/3 such rows when k=2. The number of nonzero entries per row is at most 3(k-1). Note that we must include rows for all i, since even if [x_i] = [x_j], it need not be the case that [x_i T] = [x_jT], since the matrices S and T do not commute. However, we have an a priori formula for the dimension of the quotient by all these relations, so we could omit many relations and just check that there are enough at the end—if there are not, we add in more.

  4. Compute the reduced row echelon form of A using Algorithm 7.6. For k=2, this is the echelon form of a matrix with size about n/3 \times n/4.

  5. Use what we have done above to read off a sparse matrix R that expresses each of the n Manin symbols in terms of a basis of Manin symbols, modulo the relations.
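Step 2 is the only step above that is not generic linear algebra, and the subtlety it mentions (a chain of two-term relations can force a generator to vanish) is easy to get wrong. The following Python sketch of that step uses union-find with signs; it is one possible implementation under the stated conventions, not the book's code. The hypothetical input relations is a list of triples (i, j, eps) encoding x_i + eps\cdot x_j = 0, so an S-relation contributes (i, j, +1), where x_j = x_iS.

```python
def quotient_two_term(n, relations):
    """Quotient generators x_0, ..., x_{n-1} by relations x_i + eps*x_j = 0 (eps = +-1).
    Returns a list whose i-th entry is (root, sign) with x_i = sign * x_root,
    or None if x_i is forced to equal 0."""
    parent = list(range(n))    # union-find parent pointers
    sign = [1] * n             # invariant: x_i = sign[i] * x_{parent[i]}
    zero = [False] * n         # zero[r]: the whole class with root r is 0

    def find(i):
        if parent[i] == i:
            return i, 1
        r, s = find(parent[i])
        parent[i], sign[i] = r, s * sign[i]   # path compression, updating the sign
        return parent[i], sign[i]

    for i, j, eps in relations:               # the relation says x_i = -eps * x_j
        ri, si = find(i)
        rj, sj = find(j)
        if ri == rj:
            if si != -eps * sj:               # forces 2*x_{ri} = 0, i.e., x_{ri} = 0 over Q
                zero[ri] = True
        else:
            parent[ri] = rj
            sign[ri] = -eps * sj * si         # chosen so that x_i = -eps * x_j still holds
            if zero[ri]:
                zero[rj] = True

    out = []
    for i in range(n):
        r, s = find(i)
        out.append(None if zero[r] else (r, s))
    return out
```

The classes returned as None are the generators forced to equal 0; the remaining roots are the free generators whose classes index the columns of the matrix A in step 3.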

Applications

Later in This Book

We sketch some of the ways in which we will apply the modular symbols algorithms of this chapter later in this book.

Cuspidal modular symbols are in Hecke-equivariant duality with cuspidal modular forms, and as such we can compute modular forms by computing systems of eigenvalues for the Hecke operators acting on modular symbols. By the Atkin-Lehner-Li theory of newforms (see, e.g., Theorem 9.4), we can construct S_k(N,\eps) for any N, any \eps, and k\geq 2 using this method. See Chapter Modular Forms for more details.

Once we can compute spaces of modular symbols, we move to computing the corresponding modular forms. We define inclusion and trace maps from modular symbols of one level N to modular symbols of level a multiple or divisor of N. Using these, we compute the quotient V of the new subspace of cuspidal modular symbols on which a “star involution” acts as +1. The Hecke operators act by diagonalizable commuting matrices on this space, and computing the systems of Hecke eigenvalues is equivalent to computing newforms \sum a_n q^n. In this way, we obtain a list of all newforms (normalized eigenforms) in S_k(N,\eps) for any N, \eps, and k\geq 2.

In Chapter Computing Periods, we compute with the period mapping from modular symbols to \C attached to a newform f \in S_k(N,\eps). When k=2, \eps=1, and f has rational Fourier coefficients, this gives a method to compute the period lattice associated to a modular elliptic curve attached to a newform (see Section All Elliptic Curves of Given Conductor). In general, computation of this map is important when finding equations for modular \Q-curves, CM curves, and curves with a given modular Jacobian. It is also important for computing special values of the L-function L(f,s) at integer points in the critical strip.

Discussion of the Literature and Research

Modular symbols were introduced by Birch [Bir71] for computations in support of the Birch and Swinnerton-Dyer conjecture. Manin [Man72] used modular symbols to prove rationality results about special values of L-functions.

Merel’s paper [Mer94] builds on work of Shokurov (mainly [Sho80a]), which develops a higher-weight generalization of Manin’s work partly to understand rationality properties of special values of L-functions. Cremona’s book [Cre97a] discusses how to compute the space of weight 2 modular symbols for \Gamma_0(N), in connection with the problem of enumerating all elliptic curves of given conductor, and his article [Cre92] discusses the \Gamma_1(N) case and computation of modular symbols with character.

There have been several Ph.D. theses about modular symbols. Basmaji’s thesis [Bas96] contains tricks to efficiently compute Hecke operators T_p, with p very large (see Section Basmaji’s Trick), and also discusses how to compute spaces of half integral weight modular forms building on what one can get from modular symbols of integral weight. The author’s Ph.D. thesis [Ste00] discusses higher-weight modular symbols and applies modular symbols to study Shafarevich-Tate groups (see also [Aga00]). Martin’s thesis [Mar01] is about an attempt to study an analogue of analytic modular symbols for weight 1. Gabor Wiese’s thesis [Wie05] uses modular symbols methods to study weight 1 modular forms modulo p. Lemelin’s thesis [Lem01] discusses modular symbols for quadratic imaginary fields in the context of p-adic analogues of the Birch and Swinnerton-Dyer conjecture. See also the survey paper [FM99], which discusses computation with weight 2 modular symbols in the context of modular abelian varieties.

The appendix of this book is about analogues of modular symbols for groups besides finite index subgroups of \SL_2(\Z), e.g., for subgroups of higher rank groups such as \SL_3(\Z). There has also been work on computing Hilbert modular forms, e.g., by Lassina Dembélé [Dem05]. Hilbert modular forms are functions on a product of copies of \h, where \SL_2(\Z) is replaced by a group of matrices with entries in a totally real field.

Glenn Stevens, Robert Pollack and Henri Darmon (see [DP04]) have worked for many years to develop an analogue of modular symbols in a rigid analytic context, which is helpful for questions about computing with overconvergent p-adic modular forms or proving results about p-adic L-functions.

Finally we mention that Barry Mazur and some other authors use the term “modular symbol” in a different way than we do. They use the term in a way that is dual to our usage; for example, they attach a “modular symbol” to a modular form or elliptic curve. See [MTT86] for an extensive discussion of modular symbols from this point of view, where they are used to construct p-adic L-functions.

Exercises

Exercise 1.11

Suppose M is an integer multiple of N. Prove that the natural map (\Z/M\Z)^* \to (\Z/N\Z)^* is surjective.

Exercise 1.12

Prove that \SL_2(\Z)\to \SL_2(\Z/N\Z) is surjective (see Lemma 1.36).

Exercise 1.13

Compute \sM_{3}(\Gamma_1(3)): list the Manin symbols and the relations they satisfy, compute the quotient, etc. Find the matrix of T_2. (Check: The dimension of \sM_{3}(\Gamma_1(3)) is 2, and the characteristic polynomial of T_2 is (x-3)(x+3).)

Exercise 1.14

Finish the proof of Proposition 1.38.

Exercise 1.15

  1. Show that if \eta = \abcd{-1}{0}{0}{1}, then \eta \Gamma \eta = \Gamma for \Gamma=\Gamma_0(N) and \Gamma=\Gamma_1(N).
  2. (*) Give an example of a finite index subgroup \Gamma such that \eta \Gamma \eta \neq \Gamma.