The Coarse Structure of the Representation Algebra of a Finite Monoid

Let M be a monoid, and let L be a commutative idempotent submonoid. We show that we can find a complete set of orthogonal idempotents L̂_0 of the monoid algebra A of M such that there is a basis of A adapted to this set of idempotents which is in one-to-one correspondence with the elements of the monoid. The basis graph describing the Peirce decomposition with respect to L̂_0 gives a coarse structure of the algebra, of which any complete set of primitive idempotents gives a refinement, and we give a criterion for this coarse structure to actually be a fine structure, which means that the nonzero elements of the monoid are in one-to-one correspondence with the vertices and arrows of the basis graph with respect to a set of primitive idempotents, this basis graph being a canonical object.


Introduction
When we speak of a coarse structure, we mean the decomposition of the monoid algebra into Peirce components corresponding to the elements of a commutative idempotent submonoid. The fine structure is then the refinement which occurs when one breaks down each of these idempotents into a sum of primitive idempotents. The basic idea of this work is to try to understand semigroup representation theory insofar as possible without delving into the group theory, and to determine criteria for monoids for which there is a coarse structure which corresponds to the fine structure.
We assume that the field K over which we are taking representations of a monoid M is of characteristic which does not divide the order of any of the maximal subgroups G_e of M, as e runs over the idempotents of M. Then Maschke's theorem applies and the group algebras of the maximal subgroups are all semisimple.
The irreducible representations of the semigroup correspond to the disjoint union of the irreducible representations of the various maximal subgroups. Thus, one condition we will need for the coarse structure to coincide with the fine structure is that the monoid be aperiodic, that is to say, that the maximal subgroups be trivial.
The quiver of a finite dimensional algebra is a combinatorial object of central importance in studying its representation theory. There has been in recent years a great deal of interest in determining properties of the quiver and relations of the monoid algebra, as in the work of Saliola [1] and of Margolis and Steinberg [2]. The object used in this paper, the basis graph, is closely related to the quiver. For some purposes, particularly deformation theory, the basis graph is preferable, and in this work we claim that for certain types of monoids the basis graph gives a much clearer picture of the relationship between the monoid and the monoid algebra than the quiver does. These are the monoids for which the coarse structure described above can be obtained for a complete set of primitive idempotents, and one of the main results of the paper is that this happens when the monoid is aperiodic and has commutative idempotents. Examples of this phenomenon given below include the matrix monoid and the set of order-preserving and extensive maps from a finite set into itself.
Let us comment that there has also been other recent work on trying to unravel the algebra structure of the monoid algebra from invariants of the monoid. Thiéry [3] is able to derive information about the Cartan matrix of the algebra from a combinatorial Cartan matrix C of the monoid, and similarly the third section in Denton's thesis [4] takes up this theme. The anonymous referee pointed out the similarity of our concept to the third section of [5], and we have in fact switched from our original notation to a version of their notation "lfix" and "rfix" to emphasize this similarity. In the case given in our Section 5 in which the coarse and fine structures coincide, the entries in the matrix C appear to be the sizes of what we call the Peirce sets of the monoid, defined in our Section 3.
Let M be a finite monoid, and let E(M) be its set of idempotents. We recall that the J-class of an element a of M is the set of all b ∈ M such that MaM = MbM. A J-class is regular if it contains an idempotent. Two idempotents e and f are called conjugate if they lie in the same J-class, and this happens if and only if there are elements x and y in the monoid such that xy = e and yx = f.

The Basis Graph of a Finite Dimensional Algebra
For any complete, orthogonal set E = {e_1, …, e_n} of idempotents in a finite dimensional algebra A with Jacobson radical J, we can define a basis graph [6] to be the directed graph with one vertex v_i for each idempotent e_i and the following loops and arrows: (i) c_ii^0 − 1 loops of weight zero at the vertex v_i, (ii) c_ij^t arrows or loops of weight t for i ≠ j or t > 0, where c_ij^t = dim_K(e_i J^t e_j) − dim_K(e_i J^{t+1} e_j), with J^0 = A.

Definition 1. A basis B of A which is a union of bases for the different Peirce components e_i A e_j is said to respect the idempotent set E.
A refinement E′ of E is a complete set of orthogonal idempotents such that every element e of E is a sum of elements of E′. If we have such a refinement, then we get a refinement of the basis graph, in which some of the loops are replaced by vertices and arrows. For example, the upper triangular 2 × 2 matrices over a field K, considered with respect to a set of idempotents containing only the identity matrix, would have a basis graph consisting of one vertex and two loops, one of weight zero and one of weight 1. We can take a refinement of the idempotent set consisting of the idempotent diagonal matrices E_11 and E_22. The refined basis graph will have two vertices corresponding to the two idempotents and one arrow of weight one.
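This refinement can be checked by direct computation. The following sketch (ours, not part of the paper's formalism) computes the Peirce components of the 2 × 2 upper triangular matrices with respect to the refined idempotent set {E_11, E_22}:

```python
# Basis of the algebra A of 2x2 upper triangular matrices, as nested lists.
E11 = [[1, 0], [0, 0]]
E12 = [[0, 1], [0, 0]]
E22 = [[0, 0], [0, 1]]
basis = [E11, E12, E22]

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def peirce_dim(e, f):
    """Dimension of e*A*f, by projecting each basis element.
    Counting distinct nonzero projections is a valid shortcut here,
    because every basis element projects to a matrix unit or to zero."""
    images = set()
    for b in basis:
        m = matmul(matmul(e, b), f)
        if any(any(row) for row in m):
            images.add(tuple(map(tuple, m)))
    return len(images)
```

Here E_11 A E_22 is spanned by E_12 alone, giving the single arrow of weight one, while E_22 A E_11 vanishes.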
The basis graph for any complete orthogonal set of primitive idempotents is an invariant of the algebra, since any two such sets are conjugate. If the algebra is basic, then the quiver with respect to right modules is just the basis graph with all arrows of weight greater than one erased, since the quiver is defined by considering the dimensions dim_K(e_i (J/J^2) e_j) (if one uses left modules, one must take the dual). If the algebra is not basic, then all matrix blocks, together with the arrows of weight zero representing matrix units, must be shrunk to points in the quiver, with a corresponding coalescence of arrows.

Finite Monoids with a Chosen Commutative Idempotent Submonoid
Our aim in this section is to construct an alternative basis of A which still corresponds one-to-one to the set of elements of the monoid but which behaves well with respect to the basis graph defined above. In particular, we will want the idempotents to be orthogonal and the other basis elements to respect the idempotent set as in Definition 1.
We now consider a finite monoid (M, ⬦) with a chosen commutative idempotent submonoid L. If M has an absorbing zero element z, we assume that z is also in L. A trivial choice would be to take L to be the identity element, together with z if it exists. Because of the commutativity, the set L is partially ordered by the relation e ⪯ f ⇔ e ⬦ f = e and is a semilattice under this ordering, with the greatest lower bound of two elements being their product. Since L contains a maximal element, the identity, L is also a lattice: the least upper bound of two elements e and f is the greatest lower bound of all the elements of L which dominate both e and f, and this set is nonempty because the identity always dominates both e and f.

Definition 2. Let M be a monoid with operation ⬦, and let L be a commutative idempotent submonoid. For an element m of the monoid M, the left L-idempotent lfix_L(m) is the product of all the idempotents e in the lattice L such that e ⬦ m = m. Similarly, the right L-idempotent rfix_L(m) is the product of all the idempotents e in the lattice L such that m ⬦ e = m. At least one such idempotent always exists, since the identity of the monoid satisfies the condition.
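Definition 2 can be computed directly from the multiplication of the monoid. The sketch below (an illustration of ours, using the full transformation monoid on two points, with L consisting of the identity and the constant map onto the first point) realizes lfix and rfix:

```python
def compose(f, g):
    """Right action: apply f first, then g; maps are tuples over {0, 1}."""
    return tuple(g[f[i]] for i in range(len(f)))

ident = (0, 1)   # identity map
c0    = (0, 0)   # constant map onto the point 0
L = [ident, c0]  # a commutative idempotent submonoid of T_2

def lfix(m):
    """Product of all e in L with e <> m = m (the left L-idempotent)."""
    fixers = [e for e in L if compose(e, m) == m]
    prod = fixers[0]
    for e in fixers[1:]:
        prod = compose(prod, e)
    return prod

def rfix(m):
    """Product of all e in L with m <> e = m (the right L-idempotent)."""
    fixers = [e for e in L if compose(m, e) == m]
    prod = fixers[0]
    for e in fixers[1:]:
        prod = compose(prod, e)
    return prod
```

For the constant map onto the second point, for example, the left L-idempotent is the constant c0 while the right L-idempotent is the identity.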
Remark 3. If M has an absorbing zero z, then z is not the left or right L-idempotent of any element except z itself.
We now pass to the reduced monoid algebra A of the monoid M over a field K, where K is a field sufficiently large that the quotient of A by its radical J splits completely as a sum of matrix blocks over K. One might, for example, take K to be algebraically closed. If M does not contain an absorbing zero element, then we set A = KM with subalgebra B = KL and define M_0 = M and L_0 = L. If M does contain a (necessarily unique) zero element z, then we let A be the quotient KM/Kz with subalgebra B = KL/Kz. This reduction is made because the full monoid algebra KM is isomorphic to the direct product of an algebra isomorphic to A and a copy of the field K representing the ideal Kz, which gives, among other disadvantages, a decomposable algebra with a disconnected quiver, whereas algebra representation theorists prefer to work with indecomposable algebras. From the point of view of representation theory the object of study is the algebra A. In the second case we define the sets M_0 = M − {z} and L_0 = L − {z}; note that M_0 and L_0 may then no longer be monoids, since the product of two elements different from z might be z.
For each a ∈ M, we let ā ∈ A be equal to a in the case without the absorbing zero and to a + Kz in the case with the absorbing zero. This will allow us to treat both cases together. We take as bases of A and B the sets M̄_0 = {ā | a ∈ M_0} and L̄_0 = {ē | e ∈ L_0}, respectively.

Definition 4. A multiplicative basis ℬ in a finite dimensional algebra A is a basis such that the product of two elements of the basis is either zero or an element of the basis. Thus ℬ is a multiplicative basis of A if and only if A is the semigroup algebra of ℬ.
The basis M̄_0 is a multiplicative basis of the reduced monoid algebra A defined above, and L̄_0 is a multiplicative basis for B.
By the Munn-Ponizovskii theorem [7], the isomorphism classes of simple modules are in one-to-one correspondence with the simple modules of the various isomorphism classes of maximal subgroups, one for each regular J-class, and we also assume that the characteristic of K does not divide the orders of any of the maximal subgroups.
It is a standard fact about rings that if e and f are commuting idempotents satisfying e ⪯ f, then f − e is an idempotent orthogonal to e. By a construction of Solomon [8], the reduced monoid algebra B, which is a subalgebra of A, has a basis of orthogonal idempotents in one-to-one correspondence with the elements of L_0, obtained by a process of Moebius inversion. These idempotents are actually primitive in B, but they are not necessarily primitive in A, as the example of taking L to be the identity demonstrates. The Moebius function μ(e, f) for elements of a partially ordered set is defined recursively to be 0 if e ⋠ f, to be 1 if e = f, and to be −∑_{e ≤ g < f} μ(e, g) for e < f.

Definition 5. For each f ∈ L_0, let ê_f = ∑_{e ≤ f} μ(e, f) ē be the corresponding primitive idempotent of B. The set of all the ê_f will be denoted by L̂_0.
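The recursion for the Moebius function translates directly into code; the following sketch (ours, for illustration) computes μ for an arbitrary finite poset given by a comparison function:

```python
def mobius(leq, elements):
    """Return mu(e, f) for all pairs, via the recursion
    mu(e, e) = 1, mu(e, f) = -sum_{e <= g < f} mu(e, g) for e < f, else 0."""
    mu = {}
    def mu_val(e, f):
        if (e, f) in mu:
            return mu[(e, f)]
        if e == f:
            val = 1
        elif not leq(e, f):
            val = 0
        else:
            val = -sum(mu_val(e, g) for g in elements
                       if leq(e, g) and leq(g, f) and g != f)
        mu[(e, f)] = val
        return val
    for e in elements:
        for f in elements:
            mu_val(e, f)
    return mu

# A chain of three idempotents, and the Boolean lattice of subsets of {1, 2}.
mu_chain = mobius(lambda a, b: a <= b, [1, 2, 3])
subsets = [frozenset(s) for s in ([], [1], [2], [1, 2])]
mu_bool = mobius(lambda a, b: a <= b, subsets)
```

On a chain the inversion simply subtracts the next lower idempotent, which is the form it takes in Example 4 below.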
For any a ∈ M_0, let

â = ê_{lfix_L(a)} · ā · ê_{rfix_L(a)}.

Note that each idempotent ê_f is the sum of the element f̄ with a linear combination of lower idempotents ē. The set of all â will be denoted by M̂_0, so that for each a ∈ M_0, we have an element ā of the multiplicative basis M̄_0 of A and an element â of the set M̂_0, also of A. Our aim in this section is to prove that M̂_0 is an alternative basis of A which behaves well with respect to the basis graph.

Lemma 6. An idempotent e ∈ L is its own left and right L-idempotent.
Proof. Obviously e ⬦ e = e. However, if f is any other element of L_0 such that f ⬦ e = e, then by the definition of the partial ordering on idempotents we have e ⪯ f. This shows that e is the minimal idempotent fixing e on the left, so lfix_L(e) = e, and the proof on the right is dual.
We define a collection of subsets of M by

P_{e,f} = {a ∈ M_0 | lfix_L(a) = e, rfix_L(a) = f},   e, f ∈ L_0.

This is the full collection of L-Peirce sets of M if there is no zero in the monoid. When there is a zero z, there would be an additional Peirce set {z}, which we ignore because it vanishes under the passage to the reduced monoid algebra. In mild abuse of notation, we refer to the sets P_{e,f} as L_0-Peirce sets.
We impose a linear ordering on L_0 which is subordinate to the natural partial ordering, and hence a partial ordering on the pairs in L_0 × L_0; we then impose a linear ordering on M_0 which is subordinate to the resulting ordering of the L_0-Peirce sets. We impose the same ordering on M̄_0. We now make the following claim.
Lemma 7. The linear transformation T : A → A which maps each ā ∈ M̄_0 to â, written in the ordered basis M̄_0, is upper triangular with 1 on the diagonal and thus invertible.
Proof. By the construction of the primitive idempotents of L̂_0, the idempotent ê_f is given by f̄ with coefficient 1 plus a linear combination of lower idempotents. Thus â is the sum of ā with coefficient 1 and a linear combination of elements of M̄_0 from strictly smaller L_0-Peirce sets. This gives the desired result.

Proof of Proposition 8. By Lemma 7, the matrix mapping M̄_0 to M̂_0 is invertible. Since M̄_0 is a basis of A, so is M̂_0.
Since every â has left and right idempotents from L̂_0, the set M̂_0 respects the set of idempotents L̂_0 as in Definition 1, and each â lies in an L̂_0-Peirce component. Since the â are linearly independent, the number of them in any L̂_0-Peirce component is less than or equal to the dimension of that component; but since the total number of elements in M̂_0 is equal to the dimension of A, each inequality must in fact be an equality. This proposition demonstrates the possibility of choosing a coarse set of idempotents which, after conversion to appropriate representation algebra idempotents using inclusion-exclusion methods, will give a one-to-one correspondence between semigroup elements and basis elements for a basis of the monoid algebra respecting L̂_0, thus indicating that the aspects of the monoid algebra visible at this level of refinement are natural to the semigroup. To get the maximum use out of the proposition, one should choose the coarse set of idempotents as fine as possible without losing commutativity. The best situation of all is that in which the set of idempotents L̂_0 is actually a complete set of primitive idempotents, and we will address that case in the last section.
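This count can be verified by machine in the smallest nontrivial case. The following sketch (our own encoding, exact arithmetic over the rationals) takes the full transformation monoid on two points with L consisting of the identity and the constant map onto the first point, builds the two orthogonal idempotents by inclusion-exclusion, and checks that the dimension of each Peirce component equals the number of monoid elements with the corresponding left and right idempotents:

```python
from fractions import Fraction

n = 2
maps = [(a, b) for a in range(n) for b in range(n)]   # all of T_2

def compose(f, g):
    """Right action: f first, then g."""
    return tuple(g[f[i]] for i in range(n))

def alg_mul(x, y):
    """Multiply two elements of the monoid algebra (dicts map -> coefficient)."""
    out = {}
    for f, a in x.items():
        for g, b in y.items():
            h = compose(f, g)
            out[h] = out.get(h, Fraction(0)) + a * b
    return {k: v for k, v in out.items() if v != 0}

eps1 = {(0, 0): Fraction(1)}   # constant map: the lower idempotent
eps2 = {(0, 1): Fraction(1)}   # identity map: the upper idempotent
# Moebius inversion along the chain gives the orthogonal idempotents.
eta = [eps1, {(0, 1): Fraction(1), (0, 0): Fraction(-1)}]

def dim_span(vectors):
    """Rank of a list of dict-vectors, by Gaussian elimination over Q."""
    keys = sorted({k for v in vectors for k in v})
    rows = [[v.get(k, Fraction(0)) for k in keys] for v in vectors]
    rank = 0
    for col in range(len(keys)):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col] != 0:
                c = rows[r][col] / rows[rank][col]
                rows[r] = [x - c * y for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank

dims = {(i, j): dim_span([alg_mul(alg_mul(eta[i], {m: Fraction(1)}), eta[j])
                          for m in maps])
        for i in range(2) for j in range(2)}
```

The dimensions come out as 1, 1, 0, 2, summing to 4, matching the sizes of the Peirce sets of the monoid.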

Examples
In general M̂_0 will not be a multiplicative basis, and even when it is, as in Example 1 below, the multiplication will not necessarily coincide with the multiplication in the monoid; that is, we may have â ⋅ b̂ ≠ ĉ for c = a ⬦ b.
Our first example is a very natural one, in which L̂_0 is a complete set of primitive idempotents.
Example 1 (monoid with multiplicative basis in which the ˆ-operator does not respect multiplication). Let M be the monoid of n × n matrix units {E_ij}, i, j = 1, …, n, together with an identity element 1 and a zero element z. The monoid L is taken to be the matrix units of the diagonal, together with 1 and z, while M_0 = M − {z}. The reduced monoid algebra A is isomorphic to the semisimple algebra K ⊕ M_n(K). In this case, since the poset of idempotents is given by z ≤ E_ii ≤ 1 for all i = 1, …, n, the inclusion and exclusion processes are very simple, giving E_ii − z for i = 1, …, n, z for z, and 1 − ∑_{i=1}^n E_ii + (n − 1)z for 1. Dividing KM by the ideal Kz to get A, we then have Ê_ii = Ē_ii = E_ii + Kz for i = 1, …, n. The basis graph is a directed graph with an isolated vertex for 1 and n vertices v_i, i = 1, …, n, with one arrow in each direction between any pair of the n vertices. In this case the basis M̂_0 is multiplicative, since Ê_ij Ê_kℓ = δ_jk Ê_iℓ and 1̂ ⋅ Ê_ij = Ê_ij ⋅ 1̂ = 0. Note that this last equation shows that we do not always have â ⋅ b̂ = ĉ for c = a ⬦ b, even when the basis is multiplicative. This monoid has three J-classes: one containing all the matrix units, one containing the identity, and one containing only z. The quiver of A consists of two points, since the equations E_ii = E_ij E_ji and E_jj = E_ji E_ij show that all of the diagonal idempotents E_ii are conjugate, as defined in the Introduction.
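The calculations in Example 1 are easy to model symbolically; the sketch below (ours, with matrix units encoded as index pairs) verifies the multiplication rules and the conjugacy of the diagonal idempotents, and checks associativity by brute force:

```python
n = 3
ONE, Z = "1", "z"
units = [(i, j) for i in range(n) for j in range(n)]
elements = [ONE, Z] + units

def mul(a, b):
    """Multiplication in the monoid of n x n matrix units with 1 and z."""
    if a == ONE:
        return b
    if b == ONE:
        return a
    if a == Z or b == Z:
        return Z
    (i, j), (k, l) = a, b
    return (i, l) if j == k else Z   # E_ij E_kl = delta_jk E_il
```

In particular, the diagonal idempotents (i, i) and (j, j) are conjugate via x = (i, j), y = (j, i), since xy = (i, i) and yx = (j, j).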
Example 1 represents one extreme possibility, in which almost all idempotents of L_0 are in a single J-class. The opposite extreme, in which each element of L_0 corresponds to a different J-class, is actually a common situation for the important class of linear algebraic monoids, which have been extensively studied by Putcha [9].

Definition 9. A cross-section lattice Λ for M is a commutative idempotent submonoid which contains exactly one element from each regular J-class.
Linear algebraic monoids arise as the Zariski closures of linear algebraic groups, but there are more general examples. The cross-section lattices in the linear algebraic monoids arise as the set of idempotents inside a maximal torus. Most linear algebraic monoids are infinite, but they have finite versions defined over finite fields.
Example 2 (monoid with cross-section lattice). If A is any finite dimensional algebra, then its multiplicative monoid (A, ⋅) is a linear algebraic monoid. Over a finite field, this will be a finite monoid. Let {f_1, …, f_n} be a complete set of primitive idempotents for A as an algebra, ordered so that {f_1, …, f_k}, with k ≤ n, are representatives of the conjugacy classes of idempotents. If, as a concrete instance of this construction, A were the algebra of m × m matrices over the field of two elements, then the matrix units E_ii, i = 1, …, m, would be such a set of primitive idempotents, but they would all be conjugate, so we would have k = 1.
Most finite monoids do not have cross-section lattices. When a cross-section lattice exists, it is a natural but not inevitable choice for the commutative idempotent submonoid L.
Example 3 (another monoid with cross-section lattice). Let (P, ⪯) be a poset with n elements x_1, …, x_n, arranged so that x_i ⪯ x_j implies i ≤ j. Let M be the submonoid of the n × n matrices over the field F_2 of two elements consisting of the matrices (c_ij) with c_ij = 0 unless x_i ⪯ x_j. In particular, all the matrices in M are upper triangular. Let L be the set of all 2^n diagonal matrices, which are all idempotents because 0 and 1 are the idempotents of the field. The submonoid L contains the zero matrix and is a cross-section lattice as in Definition 9 above.
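Example 3 can be verified by brute force for a small poset. The sketch below (ours; the three-element poset with x_1 and x_2 both below x_3 is an arbitrary choice) checks that the allowed matrices are closed under multiplication over F_2 and that the 2^n diagonal matrices form a commutative idempotent submonoid:

```python
from itertools import product

# Poset on {0, 1, 2}: 0 <= 2 and 1 <= 2, plus reflexivity.
def leq(i, j):
    return i == j or (j == 2 and i in (0, 1))

n = 3
positions = [(i, j) for i in range(n) for j in range(n) if leq(i, j)]

def matmul2(a, b):
    """Multiply n x n matrices over the field F_2."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) % 2
                       for j in range(n)) for i in range(n))

# All matrices supported on the allowed positions.
monoid = []
for bits in product((0, 1), repeat=len(positions)):
    m = [[0] * n for _ in range(n)]
    for (i, j), v in zip(positions, bits):
        m[i][j] = v
    monoid.append(tuple(tuple(row) for row in m))

diagonals = [m for m in monoid
             if all(m[i][j] == 0 for i in range(n) for j in range(n) if i != j)]
closed = all(matmul2(a, b) in monoid for a in monoid for b in monoid)
```

Closure under multiplication follows from the transitivity of the partial order, which the brute-force check confirms for this instance.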
The idempotents ê_e in L̂_0 are primitive only when the diagonal matrix e has a single nonzero entry. Otherwise the submonoid eMe has a nontrivial maximal subgroup G_e, and in order to find a set of primitive idempotents for ê_e A ê_e, we will have to decompose the group algebra KG_e into its irreducible representations.
In an earlier paper [10], we proposed an even coarser set of idempotents based on intervals along the diagonal.
There are many important examples of cross-section lattices. The one we will consider with particular care, which motivated our search for a general coarse structure, is the monoid of endofunctions on a finite set. This monoid has no zero element.
Example 4 (obtaining a complete decomposition of the identity into orthogonal idempotents for a specific representation of T_n). Let T_n be the monoid of functions from the set N = {1, 2, …, n} into itself, acting on the right, with composition as the operation. The monoid T_n has n^n elements. It has a natural filtration by the two-sided ideals I_k of maps whose image has at most k elements, with I_n = T_n. We define a representation ρ of T_n, operating from the right on a vector space V = ⟨v_1, …, v_n⟩, by sending f ∈ T_n to ρ(f), the matrix with entries 1 in the positions (i, i ⋅ f) and 0 everywhere else. Then f ∈ I_k if and only if ρ(f) has rank at most k. If we denote the constant map with unique image j by c_j, then ρ(c_j) is the matrix with all ones in column j. These are the only matrices in the image of ρ of rank 1, so I_1 = {c_1, …, c_n}. These constant maps will play a special role in what follows, so we note for future reference that if f is an arbitrary element of T_n, then c_j ⬦ f = c_{j⋅f}. We now fix a distinguished index 1 and define embeddings φ_k of T_k in T_n by letting φ_k(f) be the element of T_n which acts like f on {1, …, k} and is constant, equal to 1, on the remaining elements. This embedding is a homomorphism of semigroups, though not a homomorphism of monoids. If we let id_k represent the identity element of T_k, then we set ε_k = φ_k(id_k). Note that ε_1 = φ_1(id_1) = c_1 and ε_n = id_n.
Since each ε_k is the image of an idempotent under a homomorphism of semigroups, it is itself an idempotent. Furthermore, ε_j ⬦ ε_k = ε_k ⬦ ε_j = ε_{min(j,k)}, so that, under the ordering of idempotents in a semigroup, by which e ⪯ f if and only if e ⬦ f = f ⬦ e = e, the ordering of the ε_k is according to the ordering of the indices and agrees with the order of the J-classes. Furthermore, the set L = {ε_1, …, ε_n} is, in fact, a cross-section lattice in T_n. We define η_1 = ε̄_1 and η_k = ε̄_k − ε̄_{k−1} for k = 2, …, n, which is the Moebius inversion for a chain. Then the idempotent η_k is orthogonal to all the ε̄_j with j < k and thus to all the idempotents η_j with j < k. A simple telescoping argument shows that η_1 + η_2 + ⋯ + η_n = ε̄_n = 1, so {η_1, …, η_n} is a complete, orthogonal set of idempotents for A.
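The orthogonality and completeness claims can be verified mechanically in the monoid algebra. The sketch below (ours; 0-indexed, so the point 0 plays the role of the distinguished index 1) builds these idempotents for T_3 and checks that they are orthogonal, idempotent, and sum to the identity:

```python
n = 3

def compose(f, g):
    """Right action on tuples over {0, ..., n-1}: f first, then g."""
    return tuple(g[f[i]] for i in range(n))

def alg_mul(x, y):
    """Convolution product in the monoid algebra (dicts map -> coefficient)."""
    out = {}
    for f, a in x.items():
        for g, b in y.items():
            h = compose(f, g)
            out[h] = out.get(h, 0) + a * b
    return {k: v for k, v in out.items() if v != 0}

def eps(k):
    """epsilon_k: identity on the first k points, constant 0 elsewhere."""
    return {tuple(i if i < k else 0 for i in range(n)): 1}

def eta(k):
    """eta_k = eps_k - eps_{k-1}, the Moebius inversion along the chain."""
    if k == 1:
        return eps(1)
    out = dict(eps(k))
    for m, c in eps(k - 1).items():
        out[m] = out.get(m, 0) - c
    return {m: c for m, c in out.items() if c != 0}

etas = {k: eta(k) for k in range((1), n + 1)}
total = {}
for e in etas.values():
    for m, c in e.items():
        total[m] = total.get(m, 0) + c
total = {m: c for m, c in total.items() if c}
```

The telescoping sum collapses to the identity map, and all pairwise products of distinct eta's vanish.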
Definition 10. The right rank rr(f) of a map f is the width of its image, that is, the largest number in the image. If rr(f) = k, then the last n − k columns of ρ(f) are zero and the kth column is nonzero. The left rank lr(f) of a map is the largest number k such that k ⋅ f ≠ 1 ⋅ f, unless the map is a constant, in which case the left rank is 1. In terms of matrices, this means that lr(f) is the index of the lowest row of ρ(f) which is not equal to the first row.
In fact, the right rank of f is the smallest k such that f ⬦ ε_k = f, since ε_k acts as the identity on the first k points and the "tail" of ε_k does not act because the corresponding columns of ρ(f) are zero. The product is composition, but because the action is from the right, first f acts and then ε_k. The left rank of f is the smallest k such that ε_k ⬦ f = f, since the first k elements go as they go under f, and the remainder go to 1 ⋅ f. The constant map c_j has right rank j and left rank 1. The left and right ranks of ε_k are both k.
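These characterizations of the two ranks are easy to test. The sketch below (ours; 0-indexed, so the "width" of the image is its maximum plus one) computes both ranks in T_3 and compares them with the compositional descriptions:

```python
n = 3

def compose(f, g):
    """Right action: f first, then g."""
    return tuple(g[f[i]] for i in range(n))

def eps(k):
    """Identity on the first k points, constant 0 elsewhere."""
    return tuple(i if i < k else 0 for i in range(n))

def rr(f):
    """Right rank: the width of the image (largest point hit, 1-indexed)."""
    return max(f) + 1

def lr(f):
    """Left rank: largest k (1-indexed) with k.f != 1.f; 1 for constants."""
    return max((i + 1 for i in range(n) if f[i] != f[0]), default=1)
```

For every f, rr(f) is the least k with f ⬦ ε_k = f and lr(f) is the least k with ε_k ⬦ f = f, which the assertions below spot-check.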
The basis graph of T_1 is the single point η_1. The basis graph of T_2 with respect to the idempotent set {η_1, η_2} consists of the two vertices, an arrow from η_1 to η_2 given by the map c_2, and a loop at η_2 given by the map τ which transposes 1 and 2. This will be the appropriate coarse structure. If we refine this by splitting η_2 into two primitive idempotents, then the loop vanishes to produce the second idempotent and the arrow comes out of only one of the primitive idempotents, corresponding to the nontrivial representation of the corresponding maximal subgroup S_2.
It should be pointed out that, where a choice is involved in the selection of the lattice L, the operation of sending a to â may behave strangely with regard to the J-classes.
In the example of the mappings T_n, all the constant maps are idempotents lying in the lowest J-class. By the choice we made of the lattice L, ĉ_1 = c̄_1 remains an idempotent, whereas all the other ĉ_j = c̄_j − c̄_1 become arrows in the radical. All the various choices are conjugate, obtained one from the other by permutations of the underlying set {1, 2, …, n} on which the mappings act. From the point of view of algebra representation theory this is quite natural, since the difference between two conjugate idempotents in a basic algebra is a linear combination of nonlooped arrows. However, it does mean that not all members of a J-class have similar representations in the basis graph.
One solution to this problem, then, would be to restrict ourselves to J-trivial monoids, those for which each J-class contains a single element, so that there is a unique idempotent in each regular J-class. However, Example 1 above shows that this would be too restrictive.
Returning to the question of multiplicative bases, it is not hard to check by direct calculation, in the case of T_2 of Example 4 above, that the basis M̂_0 is multiplicative. We turn now to T_3 and the 14-dimensional Peirce component with right and left idempotents η_3, in order to show that M̂_0 is not, in general, a multiplicative basis. The elements not in the radical correspond to the six permutations of 1, 2, and 3, and the elements in the radical squared correspond, under the mapping of â to a, to all the arrangements of two distinct numbers such that 3 is among them and the first and last numbers are different; writing each map f as the word (1⋅f)(2⋅f)(3⋅f), these are 311, 322, 133, 233, 331, 332, 113, and 223. Let us set a = 231 and calculate the product â ⋅ â. A slightly tedious calculation of the sixteen terms in the product shows that eight cancel each other out, and the remaining eight correspond to a linear combination which is not a scalar multiple of any single element of M̂_0. Thus the basis is not multiplicative.

The Fine Structure
In semigroup theory, a pseudovariety is a collection of finite semigroups closed under homomorphic images, subsemigroups, and finite direct products. Although we have been studying monoids, the property of being a monoid is not stable under taking subsemigroups, as we see from Example 1, since the matrix units together with z form a subsemigroup. Since every semigroup can be converted into a monoid by adjoining an identity element, we will content ourselves with trying to find pseudovarieties for which there is a coarse structure which coincides with the fine structure. The point at which this correspondence most readily breaks down is where the local subgroups are broken down into irreducible representations. Thus for discussing the fine structure we will consider only semigroups for which the maximal subgroups are all trivial. This is the pseudovariety A of aperiodic semigroups.
Example 5 (a J-trivial monoid). Consider the monoid of all partial one-to-one maps from N = {1, 2, …, n} to itself for some natural number n. This has a submonoid M consisting of all the partial maps f which are order-preserving and extensive, which means that if f is defined at some i ∈ N, then i ⋅ f ≥ i. These monoids are not just aperiodic; they are actually J-trivial [5].
For this particular example, the connected components of the basis graph are determined by a natural number k which is the common cardinality of the domain and image. The left and right idempotents are the identity maps on the domain and image, respectively. The quiver is the Hasse diagram of the partial order on subsets A and B of size k given by A ≥ B if there is an order-preserving, extensive f : A → B. The basis graph is the entire diagram generated by the partial order.
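The key structural claims of Example 5 can be confirmed by enumeration for small n. The sketch below (ours; partial maps encoded as tuples with None for undefined points) enumerates the monoid for n = 3 and checks that it is closed under composition, that the idempotents are exactly the partial identity maps, and that they commute:

```python
from itertools import product

n = 3

def is_valid(f):
    """Order-preserving, extensive, one-to-one partial map (None = undefined)."""
    pts = [(i, v) for i, v in enumerate(f) if v is not None]
    vals = [v for _, v in pts]
    if len(set(vals)) != len(vals):
        return False                      # not one-to-one
    if any(v < i for i, v in pts):
        return False                      # not extensive
    return all(v1 < v2 for (_, v1), (_, v2) in zip(pts, pts[1:]))  # order

M = [f for f in product([None, 0, 1, 2], repeat=n) if is_valid(f)]

def compose(f, g):
    """f first, then g; undefined wherever the chain of values breaks."""
    return tuple(None if f[i] is None or g[f[i]] is None else g[f[i]]
                 for i in range(n))

idempotents = [f for f in M if compose(f, f) == f]
```

The 8 idempotents found are the partial identities on the 2^3 subsets of {0, 1, 2}.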
The representation theory of J-trivial monoids has received considerable attention recently [5]. There is a homomorphism from a J-trivial monoid into its lattice of idempotents, and when this is extended to the monoid algebras, the kernel corresponds to the radical in the representation algebra. There is a one-to-one correspondence between the idempotents of the monoid and the irreducible representations in the representation algebra. See [5] for details.
The idempotents of a J-trivial monoid do not, in general, commute, and thus we cannot apply our theory to every J-trivial monoid. However, in Example 5 the idempotents are partial identity maps, which do commute with each other. Thus the idempotents in that example form a lattice in which the meet is given by multiplication. We conclude that the idempotents ê_e are all primitive and thus, by Proposition 8, the coarse and fine structures coincide.
The simplest case in which we can find a semilattice of idempotents is a case like Example 5 where all the idempotents commute; this is the pseudovariety IC. Ash [11] proved that if a semigroup lies in this pseudovariety, then it is a homomorphic image of a subsemigroup of an inverse semigroup, one for which to each element x is associated a unique element x^{−1} such that x x^{−1} x = x and x^{−1} x x^{−1} = x^{−1}. By uniqueness, since for any idempotent e the choice e^{−1} = e satisfies this condition, each idempotent is its own partial inverse.
Example 6 (monoid where the coarse and fine structures coincide). Let N = {1, 2, …, n} for some natural number n, and let M be the set of one-to-one order-preserving partial maps f : N → N. The idempotents in this monoid are the partial identity maps and thus commute with each other. It is aperiodic because, for any idempotent e, the only element of the maximal subgroup at e is e itself, for if D is the common domain and range of e, the only order-preserving one-to-one map from D to itself is the identity. Although aperiodic, the monoid is not J-trivial. The J-classes are determined by the number k of elements in the domain, so that the J-class of rank k contains the idempotents corresponding to all the n!/k!(n − k)! subsets of N with k elements, as well as all order-preserving partial maps whose domain and range have k elements. Since for each pair of sets A and B with k elements there is a unique one-to-one order-preserving map between them, the principal factor corresponding to each J-class has a reduced algebra isomorphic to the matrix algebra whose size is the number of idempotents in the J-class. Since these matrix algebras generate the matrix blocks of the Munn-Ponizovskii theorem, the idempotents ê_e are in fact primitive, since they correspond to the diagonal idempotents of the matrix blocks.
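The counts in Example 6 can be checked directly. Since an order-preserving one-to-one partial map is determined by its domain and image, the sketch below (ours) enumerates the monoid for n = 3 and verifies that the J-class of rank k contains C(n, k)^2 elements, C(n, k) of which are idempotents:

```python
from itertools import combinations
from math import comb

n = 3

# An order-preserving one-to-one partial map is determined by its domain and
# its image, two subsets of N of the same size; encode a map as (dom, img).
M = [(dom, img)
     for k in range(n + 1)
     for dom in combinations(range(n), k)
     for img in combinations(range(n), k)]

counts = {k: sum(1 for dom, img in M if len(dom) == k) for k in range(n + 1)}
idem_counts = {k: sum(1 for dom, img in M if len(dom) == k and dom == img)
               for k in range(n + 1)}
```

For n = 3 this gives J-classes of sizes 1, 9, 9, 1 with 1, 3, 3, 1 idempotents, in agreement with the binomial counts in the example.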
For any semigroup S, an ideal I is a subset closed under multiplication by elements of S, and the Rees quotient semigroup S/I is the semigroup obtained by replacing the elements of I by a single zero element z. The principal factor of a semigroup corresponding to the J-class J of an element a is J(a)/I(a), where J(a) is the principal two-sided ideal generated by a, I(a) = J(a) − J, and if I(a) is empty then we understand the quotient to be J(a) itself. The semigroups which can appear as principal factors are of a limited number of types: they can be semigroups with zero multiplication, 0-simple semigroups, or ideal simple semigroups.
Proposition 11. Let M be a monoid with zero z. If the monoid is aperiodic and has commuting idempotents, the coarse structure of M with respect to the set of all nonzero idempotents will coincide with its fine structure. There is a one-to-one correspondence between the nonzero elements of the monoid and the set of vertices and arrows of the basis graph.

Remark 12. The monoid thus belongs to A ∩ IC, the intersection of the pseudovariety A of aperiodic semigroups and the pseudovariety IC of semigroups with commuting idempotents. We mention this notation because some of the theorems we rely on are phrased in the language of varieties.
Proof. Because we are in the pseudovariety IC, we can choose L to be the set of all idempotents, and it will be a submonoid. We let {J_1, …, J_r} be the set of regular J-classes in M, where J_1 is the class of 1 and J_r is the class of z. By the Munn-Ponizovskii theorem, the radical quotient of the representation algebra A of M is of the form ⊕_i M_{n_i}(K G_i), where the G_i are the maximal subgroups of the regular J-classes J_i and the n_i are numbers depending on the structure of the class J_i. We have assumed that the maximal subgroups are trivial, so each regular J-class gives a single matrix block. Furthermore, by the standard reference [12], we find that n_i for a monoid in IC is exactly the number of idempotents in the J-class. By general properties of the semigroups in the pseudovariety IC, the set Reg(M) of elements in regular J-classes is a submonoid of M. Since each of these regular J-classes contains an idempotent, the regular principal factors cannot have zero multiplication, and since we have assumed that M has a zero, they cannot be ideal simple, so each must be 0-simple, and, in fact, completely 0-simple, since a finite 0-simple semigroup is completely 0-simple. A regular semigroup with commuting idempotents is an inverse semigroup; that is, every element a has a unique inverse a^{−1} belonging to the same J-class as a. Each regular principal factor is thus also an inverse semigroup and is, in fact, a Brandt semigroup, a completely 0-simple semigroup which is an inverse semigroup. It is also aperiodic, and a finite aperiodic Brandt semigroup is a semigroup of matrix units. Thus, for each regular J-class of a monoid in IC ∩ A, the principal factor is a matrix-unit semigroup. If e and f are equivalent idempotents, there must be elements x ∈ eMf and y ∈ fMe such that xy = e and yx = f. In this case x and y are unique and are precisely the matrix units which transfer from e to f and back. The degree of the matrix block is equal to the number of idempotents equivalent to e. The basis graph for the corresponding matrix block is the complete directed graph with a number of vertices equal to the degree of the matrix block, where for a directed graph, completeness gives one arrow in each direction between any pair of vertices.
Thus for each regular J-class, the orthogonal idempotents obtained by inclusion-exclusion from the idempotents in the J-class are equal in number to the degree of the matrix block. If so, they must be primitive, and the coarse structure determined by this choice of idempotents does indeed coincide with the fine structure. Thus, by applying Proposition 8, we have a one-to-one correspondence between the elements of the monoid and the vertices and arrows of the basis graph.
If we were to require the semigroup idempotents to be primitive, then the regular part would have to be an annihilating sum of Brandt semigroups, as mentioned in the remark after Theorem 3 of [13].However, since we only require the algebra idempotents after inclusion-exclusion to be primitive, we have access to a much richer collection of semigroups.

Proposition 8. The set M̂_0 = {â | a ∈ M_0} is a basis for A whose elements lie in the components of the Peirce decomposition of A by the idempotent set L̂_0. Thus the dimension of each Peirce component ê_e A ê_f can be obtained from the monoid by calculating the number of elements with left idempotent e and right idempotent f.