-- This illustrates (most of) a new(?) encoding of ad-hoc polymorphism using
-- dependent types and implicits.
--
-- Benefits of this approach:
-- • can have multiple “ambiguous” instances without requiring things like
--   Haskell’s newtypes to distinguish them;
-- • can disambiguate instances with minimal information, rather than having to
--   know some arbitrary instance name as you would with a system like Scala’s
--   implicits;
-- • implementers don’t need to know/provide the full family of instances – they
--   can define only the basic properties of their operations and expect
--   aggregate structures to be made available automatically;
-- • new aggregate structures based on existing properties can be created
--   anywhere, and instances for previously-defined operations will be coalesced
--   as necessary without additional changes; and
-- • the “diamond problem” in inheritance-based systems is removed – if the
--   correct properties are specified, both `Monad` and `Traversable` can be
--   coalesced without either winding up with ambiguous `Functor` instances or
--   having to define a mega-instance like `Monad with Traversable`.
--
-- Tradeoffs:
-- • requires dependent types, so not as available or ergonomic in all
--   languages;
-- • relies on laws, which are sometimes difficult to prove and unfamiliar to
--   many programmers (although you could make the properties empty, where they
--   act as just a tag asserting that the property is satisfied, or use
--   `postulate` to make it an axiom that you don’t require proof of).
--
-- Written with assistance from (but not necessarily endorsement of)
-- @monoidmusician & @srdqty.
--
-- NB: This probably _overuses_ implicits beyond the maximum benefit we get from
--     them. Figuring out where we should have explicit parameters may simplify
--     some things.
module OpClasses
||| First, we define a type for each individual property (or law) that we care
||| about. These should be as atomic as possible.
|||
||| TODO: Not sure why these need to be `data` declarations instead of
|||       functions, but it currently helps with implicit resolution.
data Associative : {op : t -> t -> t} -> Type where
  MkAssociative : ((a : t) -> (b : t) -> (c : t) -> op a (op b c) = op (op a b) c)
                -> Associative {op}

data Commutative : {op : t -> t -> t} -> Type where
  MkCommutative : ((a : t) -> (b : t) -> op a b = op b a) -> Commutative {op}

data LeftIdentity : {op : t -> t -> t} -> {leftUnit : t} -> Type where
  MkLeftIdentity : ((a : t) -> op leftUnit a = a)
                 -> LeftIdentity {op} {leftUnit}

data RightIdentity : {op : t -> t -> t} -> {rightUnit : t} -> Type where
  MkRightIdentity : ((a : t) -> op a rightUnit = a)
                  -> RightIdentity {op} {rightUnit}

data LeftCancellative : {op : t -> t -> t} -> Type where
  MkLeftCancellative
    : ((a : t) -> (b : t) -> (c : t) -> op a b = op a c -> b = c)
    -> LeftCancellative {op}

data RightCancellative : {op : t -> t -> t} -> Type where
  MkRightCancellative
    : ((a : t) -> (b : t) -> (c : t) -> op a c = op b c -> a = b)
    -> RightCancellative {op}

data LeftDistributive : {mul : t -> t -> t} -> {add : t -> t -> t} -> Type where
  MkLeftDistributive
    : ((a : t) -> (b : t) -> (c : t) -> mul a (add b c) = add (mul a b) (mul a c))
    -> LeftDistributive {mul} {add}

data RightDistributive : {mul : t -> t -> t} -> {add : t -> t -> t} -> Type where
  MkRightDistributive
    : ((a : t) -> (b : t) -> (c : t) -> mul (add a b) c = add (mul a c) (mul b c))
    -> RightDistributive {mul} {add}
||| • Structures are defined as records, with properties as fields.
||| • We use a function for the type to make the parameters to the record
|||   implicit and potentially to make other changes – e.g., in `Identity` we
|||   existentialize `unit` since it’s necessarily unique.
||| • We then use a function that specifies each of the components of the
|||   structure as an implicit. These can be either individual properties or
|||   other structures that are subsumed by the one being defined. This
|||   function, with all arguments implicit, serves as an implicit search
|||   assistant.
record IdentityRec (t : Type) (op : t -> t -> t) (unit : t) where
  constructor IdentityInfo
  leftIdentity : LeftIdentity {op} {leftUnit=unit}
  rightIdentity : RightIdentity {op} {rightUnit=unit}

Identity : {op : t -> t -> t} -> Type
Identity {t} {op} = (unit : t ** IdentityRec t op unit)

%hint
resolveIdentity
  : {auto l : LeftIdentity {op} {leftUnit=unit}}
  -> {auto r : RightIdentity {op} {rightUnit=unit}}
  -> Identity {op}
resolveIdentity {l} {r} = (_ ** IdentityInfo l r)

record CancellativeRec (op : t -> t -> t) where
  constructor CancellativeInfo
  leftCancellative : LeftCancellative {op}
  rightCancellative : RightCancellative {op}

||| NB: We could eliminate a lot of definitions like this if records took
|||     implicit parameters.
Cancellative : {op : t -> t -> t} -> Type
Cancellative {op} = CancellativeRec op
||| NB: And we could eliminate a lot of definitions like this if record
|||     constructors could take implicit parameters.
resolveCancellative
  : {auto l : LeftCancellative {op}}
  -> {auto r : RightCancellative {op}}
  -> Cancellative {op}
resolveCancellative {l} {r} = CancellativeInfo l r

record DistributiveRec (mul : t -> t -> t) (add : t -> t -> t) where
  constructor DistributiveInfo
  leftDistributive : LeftDistributive {mul} {add}
  rightDistributive : RightDistributive {mul} {add}

Distributive : {mul : t -> t -> t} -> {add : t -> t -> t} -> Type
Distributive {mul} {add} = DistributiveRec mul add

%hint
resolveDistributive
  : {auto l : LeftDistributive {mul} {add}}
  -> {auto r : RightDistributive {mul} {add}}
  -> Distributive {mul} {add}
resolveDistributive {l} {r} = DistributiveInfo l r

record SemigroupRec (op : t -> t -> t) where
  constructor SemigroupInfo
  associative : Associative {op}

Semigroup : {op : t -> t -> t} -> Type
Semigroup {op} = SemigroupRec op

%hint
resolveSemigroup : {auto associative : Associative {op}} -> Semigroup {op}
resolveSemigroup {associative} = SemigroupInfo associative

record MonoidRec (t : Type) (op : t -> t -> t) (unit : t) where
  constructor MonoidInfo
  leftIdentity : LeftIdentity {op} {leftUnit=unit}
  rightIdentity : RightIdentity {op} {rightUnit=unit}
  associative : Associative {op}

Monoid : {op : t -> t -> t} -> Type
Monoid {t} {op} = (unit : t ** MonoidRec t op unit)

%hint
resolveMonoid
  : {auto semigroup : Semigroup {op}} -> {auto identity : Identity {op}} -> Monoid {op}
resolveMonoid {semigroup} {identity} =
  let (unit ** ident) = identity
  in (unit ** MonoidInfo (leftIdentity ident) (rightIdentity ident) (associative semigroup))

||| NB: We use implicit conversions to extract _unique_ substructures from
|||     larger structures. However, implicit conversions don’t chain, so we need
|||     not only `groupToMonoid` but also `groupToSemigroup` – this means there
|||     are more conversions defined than we’d like, but it also has some
|||     benefits.
|||
|||     A language that had row types or subtypes could handle this
|||     automatically with either of those features.
implicit
monoidToSemigroup : Monoid {op} -> Semigroup {op}
monoidToSemigroup (_ ** monoid) =
  resolveSemigroup {associative=associative monoid}
record SemiringRec (add : t -> t -> t) (mul : t -> t -> t) where
  constructor SemiringInfo
  additive : Semigroup {op=add} -- should be commutative
  multiplicative : Monoid {op=mul}
  leftDistributive : LeftDistributive {mul} {add}
  rightDistributive : RightDistributive {mul} {add}

Semiring : {add : t -> t -> t} -> {mul : t -> t -> t} -> Type
Semiring {add} {mul} = SemiringRec add mul

%hint
resolveSemiring
  : {auto distributive : Distributive {mul} {add}}
  -> {auto additive : Semigroup {op=add}}
  -> {auto multiplicative : Monoid {op=mul}}
  -> Semiring {add} {mul}
resolveSemiring {distributive} {additive} {multiplicative} =
  SemiringInfo
    additive
    multiplicative
    (leftDistributive distributive)
    (rightDistributive distributive)

||| NB: Here we take advantage of the lack of implicit chaining.
|||     `semiringToMonoid` is unique, but `monoidToSemigroup . semiringToMonoid`
|||     wouldn’t be, because there’s another (commutative) semigroup in the
|||     semiring.
implicit
semiringToMonoid : Semiring {mul} -> Monoid {op=mul}
semiringToMonoid semiring = multiplicative semiring

||| Now we can define operations that use our structures as constraints. Again,
||| encoded as implicit parameters.
combine3 : {auto semigroup : Semigroup {t} {op}} -> t -> t -> t -> t
combine3 {op} a b c = op a (op b c)

fold : {auto monoid : Monoid {t} {op}} -> List t -> t
fold {op} {monoid} ts = let (unit ** _) = monoid in foldr op unit ts

||| It’s also possible to have some properties implied by combinations of other
||| properties.
%hint
resolveLeftIdentity
  : {auto commutative : Commutative {op}}
  -> {auto rightIdentity : RightIdentity {op} {rightUnit}}
  -> LeftIdentity {op} {leftUnit=rightUnit}
resolveLeftIdentity {rightUnit} {commutative=MkCommutative com} {rightIdentity=MkRightIdentity id} =
  MkLeftIdentity (\a => rewrite (com rightUnit a) in (id a))

%hint
resolveRightIdentity
  : {auto commutative : Commutative {op}}
  -> {auto leftIdentity : LeftIdentity {op} {leftUnit}}
  -> RightIdentity {op} {rightUnit=leftUnit}
resolveRightIdentity {leftUnit} {commutative=MkCommutative com} {leftIdentity=MkLeftIdentity id} =
  MkRightIdentity (\a => rewrite (com a leftUnit) in (id a))
||| Then we prove _only_ the individual properties for each operation. These are
||| our “instances”, which we should really have a mechanism for guaranteeing
||| uniqueness of, and they shouldn’t have to be named.
%hint
addAssoc : Associative {op=Prelude.Nat.plus}
addAssoc = MkAssociative plusAssociative

%hint
addLeftIdent : LeftIdentity {op=Prelude.Nat.plus} {leftUnit=Z}
addLeftIdent = MkLeftIdentity plusZeroLeftNeutral

%hint
addCommutative : Commutative {op=Prelude.Nat.plus}
addCommutative = MkCommutative plusCommutative

||| Thanks to `resolveRightIdentity`, we don’t need to define `addRightIdent`,
||| because it’s implied by `addLeftIdent` and `addCommutative`.
|||
||| And, finally, we can call the constrained functions (`combine3` and `fold`)
||| on `Nat`s, and the aggregate “instances” on larger structures are
||| automatically built out of our individual implicit definitions of various
||| properties.
test_s : combine3 (S Z) (S (S Z)) (S (S (S Z))) = 6
test_s = Refl

test_m : fold [S Z, S (S Z), S (S (S Z))] = 6
test_m = Refl

||| But things can get ambiguous. Here we have a second associative operation on
||| Naturals.
%hint
mulAssoc : Associative {op=Prelude.Nat.mult}
mulAssoc = MkAssociative multAssociative

||| Unfortunately, Idris implicit search just looks backward for the closest
||| definition. So even though we have both `Semigroup {op=plus}` and
||| `Semigroup {op=mult}`, it finds the multiplication one. Ideally, this
||| wouldn’t compile, and we’d have to specify which one we want. Instead, it
||| will always use the multiplication one, and we’d need to explicitly specify
||| `plus` if we want to use that one.
test_s2 : combine3 (S (S Z)) (S (S Z)) (S (S (S Z))) = 12
test_s2 = Refl

||| If we had the uniqueness constraint we’d like, we’d have to specify the `op`:
test_s2b : combine3 {op=Prelude.Nat.plus} (S (S Z)) (S (S Z)) (S (S (S Z))) = 7
test_s2b = Refl
||| And sometimes we need to “extract” a smaller structure implied by a larger
||| one. This can be explicit or implicit. Here, we pull out the distinct
||| semigroup and monoid from the semiring. We _should_ need to specify the
||| semigroup here (because both semiring operations form semigroups), but
||| _shouldn’t_ have to specify the monoid for `fold`, as only one of the
||| operations forms a monoid.
%hint
multLeftIdent : LeftIdentity {op=Prelude.Nat.mult} {leftUnit=S Z}
multLeftIdent = MkLeftIdentity multOneLeftNeutral

%hint
mulRightIdent : RightIdentity {op=Prelude.Nat.mult} {rightUnit=S Z}
mulRightIdent = MkRightIdentity multOneRightNeutral

%hint
mulLeftDist : LeftDistributive {mul=Prelude.Nat.mult} {add=Prelude.Nat.plus}
mulLeftDist = MkLeftDistributive multDistributesOverPlusRight

%hint
mulRightDist : RightDistributive {mul=Prelude.Nat.mult} {add=Prelude.Nat.plus}
mulRightDist = MkRightDistributive multDistributesOverPlusLeft

mixed : {auto semiring : Semiring {t}} -> t -> t -> List t -> t
mixed {semiring} x y =
  combine3 {semigroup=additive semiring} x y . fold

||| This is overconstrained, but it illustrates how we can take advantage of the
||| implicit conversions. `combine3` expects a `Semigroup`, but we provide it a
||| `Monoid` (and this _must_ be explicit, because there are multiple possible
||| semigroups in the context). The only available `Monoid` still gets used for
||| `fold`.
mixed_m : {auto semiring : Semiring {t}} -> t -> t -> List t -> t
mixed_m {semiring} x y =
  combine3 {semigroup=multiplicative semiring} x y . fold

test_sr : mixed (S (S Z)) (S (S Z)) [S Z, S (S Z), S (S (S Z))] = 10
test_sr = Refl

||| This shows how the `Monoid` is automatically converted to the `Semigroup`
||| needed by `combine3`.
|||
||| TODO: This shows a place we have to be careful (that we haven’t been careful
|||       with above). Notice in `Monoid {t} {op}`, we never use `op` elsewhere,
|||       so we shouldn’t need to call out that parameter. However, if we omit
|||       it, then `test_m2` can’t specify the operation to use for the monoid
|||       and would have to reify a monoid instance somehow. We have to be
|||       careful because it’s a problem that isn’t apparent until we try to
|||       _use_ `lessMixed`. We have a similar problem with `unit`, where there’s
|||       no way to expose the unit as a disambiguator (I don’t think), since
|||       it’s in the `Sigma` of the monoid.
lessMixed : {auto monoid : Monoid {t} {op}} -> t -> t -> List t -> t
lessMixed x y = combine3 x y . fold

test_m2 : lessMixed {op=Prelude.Nat.mult} 2 2 [1, 2, 3] = 24
test_m2 = Refl
{-# language ConstraintKinds
           , FlexibleContexts
           , FlexibleInstances
  #-}

-- | This is a sort-of introductory version of operation classes. It lacks many
--   features, but is encodable in fairly vanilla Haskell, so is hopefully
--   fairly comprehensible. There are richer versions in Dependent Haskell and
--   other languages, but it is hopefully useful to understand some of the
--   concepts from this before looking into the deeper versions (which also have
--   to jump through more hoops because of various language limitations).
module WeakOpClasses where

import Prelude hiding (Monoid, Semigroup, product, sum)

import Data.Kind (Type)
import Data.Monoid (Sum (..), Product (..))
import Numeric.Natural

-- | This type class gives us a place to put a single operation, without any
--   properties not implied by the type. This wouldn’t need to exist if the
--   operation could live in the parameters.
class Closed a where
  op :: a -> a -> a
-- | These classes are empty. In a dependently-typed language, they would
--   contain the laws that require proofs, but here they are simply tags that
--   _assert_ that the property holds.
class Closed a => Associative a

class Closed a => Commutative a

class Closed a => LeftIdentity a where
  leftUnit :: a

class Closed a => RightIdentity a where
  rightUnit :: a

class (Closed (Sum a), Closed (Product a)) => LeftDistributive a

class (Closed (Sum a), Closed (Product a)) => RightDistributive a

type Identity a = (LeftIdentity a, RightIdentity a)

unit :: Identity a => a
unit = leftUnit

type Distributive a = (LeftDistributive a, RightDistributive a)

type Semigroup = Associative

type Monoid a = (Semigroup a, Identity a)

type CommutativeSemigroup a = (Commutative a, Semigroup a)

type CommutativeMonoid a = (Commutative a, Monoid a)

type Semiring a =
  (Distributive a, CommutativeSemigroup (Sum a), Monoid (Product a))

type Rig a = (Semiring a, CommutativeMonoid (Sum a))

instance Closed (Sum Natural) where
  op = (+)

instance Closed (Product Natural) where
  op = (*)

instance Associative (Sum Natural)
instance Commutative (Sum Natural)

instance LeftIdentity (Sum Natural) where
  leftUnit = 0

instance RightIdentity (Sum Natural) where
  rightUnit = 0

instance Associative (Product Natural)
instance Commutative (Product Natural)

instance LeftIdentity (Product Natural) where
  leftUnit = 1

instance RightIdentity (Product Natural) where
  rightUnit = 1

instance LeftDistributive Natural
instance RightDistributive Natural

combine3 :: Semigroup t => t -> t -> t -> t
combine3 x y z = op (op x y) z

fold :: Monoid t => [t] -> t
fold = foldr op unit
-- | FIXME: How do we select which `Monoid` we want to use in this case?
sum :: Rig t => [t] -> t
sum = getSum . fold . fmap Sum

-- | This one can _only_ use the multiplicative, as `fold` requires a `Monoid`
--   and a `Semiring` has only one of those. So we _shouldn’t_ need to tell the
--   compiler which instance we want, but because of the `newtype` hack, we do
--   need to.
product :: Semiring t => [t] -> t
product = getProduct . fold . fmap Product

mixed :: Semiring t => t -> t -> [t] -> t
mixed x y = getSum . combine3 (Sum x) (Sum y) . Sum . product

mixed_m :: Semiring t => t -> t -> [t] -> t
mixed_m x y = getProduct . combine3 (Product x) (Product y) . Product . product

lessMixed :: Monoid t => t -> t -> [t] -> t
lessMixed x y = combine3 x y . fold
Attempting a point-by-point response …
This is cool stuff!
Thanks!
We could name the generic `Monoid` operation `fold`, and then `sum` could be `fold` on an additive `Monoid` and `prod` would be `fold` on a multiplicative `Monoid`; that way the name can help disambiguate the operation from the context.
Yeah, what I have here is just for illustration. I could see what you’re suggesting being implemented as

    fold : {auto monoid : Monoid {t} {op}} -> List t -> t
    fold {op} {monoid} ts = let (unit ** _) = monoid in foldr op unit ts

    sum : {auto rig : Rig {t}} -> List t -> t
    sum {rig} = fold {monoid=additive rig}

    prod : {auto semiring : Semiring {t}} -> List t -> t
    prod {semiring} = fold {monoid=multiplicative semiring}
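For comparison, the same naming idea works in plain Haskell with the standard `Monoid` class (a sketch using `Data.Monoid`, not the encoding from this gist; the primed names are mine):

```haskell
import Data.Monoid (Product (..), Sum (..))

-- The generic fold is `foldMap`; the newtype picked at the call site is what
-- disambiguates which monoid is meant, playing the role of the explicit
-- `{monoid=additive rig}` implicit above.
sum' :: Num a => [a] -> a
sum' = getSum . foldMap Sum

prod' :: Num a => [a] -> a
prod' = getProduct . foldMap Product
```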
Largely the issue is guessing the correct operation to use. For each operation, there is at most one Semigroup instance (because proofs are irrelevant), one Monoid instance (because two-sided identities are unique), and one Group instance (because two-sided inverses are unique). When there's less structure (e.g. multiple left identities), it doesn't always work out so nicely, but those are rarer instances.
Yeah, part of the intent of this approach is to make the type exactly specific enough to have only one instance. However, that starts becoming verbose when you’re naming all the operations everywhere. The use of implicits makes it possible to not fully specify the type, but you can always specify more of the type to eliminate any ambiguity.
In this spirit, I wonder if the identity elements even need to be mentioned at the type level – is there any chance they could just be fields of the Monoid? Same thing with inverses in Group. They have proof content (that the identity or inverse exists) and computational content (the manner of finding them), but they are unique so it isn't so important "which" one is chosen. That way Semigroup, Monoid, and Group could all have the same type-level signature.
Yeah, `Semigroup`, `Monoid`, and `Group` do all have the same type-level signature. Notice how `LeftIdentity` and `RightIdentity` have the unit in the type (because, as you mention below, there can be multiple identities, so to be unique, we need that in the type). But (two-sided) `Identity` removes the unit from the type, existentializing it. And so `Monoid` also doesn’t have the unit in the type.

This does have a negative impact currently (IMO) – you can’t disambiguate a monoid by saying `Monoid {unit=1}`, because the unit is not part of the type.
I don't think distributive operations are unique, but I don't have any good counterexamples off the top of my head …
I’m not sure what you mean by “I don't think distributive operations are unique”.
Would it be possible to have `Ring = Distributive {t} Group Monoid {add} {mul}`? That is, we specify parametrically at the type level what each operation is required to satisfy (besides distributive with respect to each other). Would that kill the implicit inference?
I’m not sure what you mean here either – sorry. I mean, I think that’s what I’m already doing:

    resolveRing
      : {auto distributive : Distributive {mul} {add}}
      -> {auto group : CommutativeGroup {add}}
      -> {auto monoid : Monoid {mul}}
      -> Ring {add} {mul}

That’s saying, “in order to have a ring, we must have an operation that forms a monoid and a second operation that forms a commutative group, where the first operation distributes over the second.” And one of the neat things is that we should be able to specify `resolveRing` in any of the equivalent ways and have it work in exactly the same cases.
    resolveRing
      : {auto distributive : Distributive {mul} {add}}
      -> {auto group : CommutativeMonoid {add}}
      -> {auto additiveInverse : Inverse {add}}
      -> {auto semigroup : Semigroup {mul}}
      -> {auto multiplicativeIdentity : Identity {mul}}
      -> Ring {add} {mul}
With larger sets of axioms, it turns out that only a subset of the full laws is sufficient to prove everything, since some laws follow from combinations of others. As a simple example, commutative operations obviously satisfy the left laws iff they satisfy the right laws … This can probably be handled with alternative constructors as needed; not a big deal.
Yeah, this has been in the back of my head. I should try it and see where I run into issues. Because what you really want is for it to be possible to coalesce properties as well as structures – i.e., given `Commutative {op=x}` and `LeftIdentity {op=x} {leftUnit=q}`, there should be a `RightIdentity {op=x} {rightUnit=q}` “for free”. But if someone explicitly defines all three, then we have duplicate instances showing up, and if we enforce uniqueness, do we run into complexity by forcing a user to find a set of properties to define that doesn’t create any duplication?
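That “for free” direction is small enough to state mechanically; as a hedged sketch in Lean 4 (the theorem name is mine), it mirrors `resolveRightIdentity` from the file above:

```lean
-- Commutativity turns a left identity into a right identity.
theorem rightIdentity_of_comm {α : Type} (op : α → α → α) (e : α)
    (comm : ∀ a b, op a b = op b a) (lid : ∀ a, op e a = a) :
    ∀ a, op a e = a :=
  fun a => (comm a e).trans (lid a)
```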
Speaking of left and right, I mentioned in PM that I've always wondered whether the two could be unified, so there's one common definition. The simplest way to do that would be to specify the left version and have `right op = left (flip op)`. But that's not very satisfying because it feels very asymmetric (even though `flip (flip op) = op` … does Idris have function extensionality?).
I wonder if there’s a way to do this that would actually give us a benefit in any way. I mean, in terms of brevity or clarity.
Great, now when can we use categories? 😄
Yeah, I’ve left them out of here for clarity, but I’m hoping all of this continues to work in the face of universe polymorphism, etc.
For the sake of completeness, if we want to characterize left and right identities fully, there seem to be four cases:

1. No identities
2. One or more left identities, but no right identities: proofwiki.org/wiki/More_than_one_Left_Identity_then_no_Right_Identity
3. One or more right identities, but no left identities
4. One (unique) two-sided identity: proofwiki.org/wiki/Identity_is_Unique

The first and last are obviously easiest to target in this framework, and are the most useful algebraically. It should be possible to classify inverses like that, but I haven't done the research yet, and it is likely as boring as “most things with a two-sided identity and inverses are Groups” (see e.g. proofwiki.org/wiki/Left_Inverse_for_All_is_Right_Inverse).
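The uniqueness in the last case has a one-line proof; a hedged sketch in Lean 4 (theorem name is mine, not from either ProofWiki page):

```lean
-- A left identity and a right identity must coincide, so a two-sided
-- identity is unique.
theorem identity_unique {α : Type} (op : α → α → α) (l r : α)
    (lid : ∀ a, op l a = a) (rid : ∀ a, op a r = a) : l = r :=
  (rid l).symm.trans (lid r)
```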
I think this is already how identities are modeled here.
@MonoidMusician – I think I see why you thought the `unit` was in the `Monoid` type – because it’s in the `MonoidRec` type, but while I would like to be able to remove the `unit` there, that’s more of an internal detail.
My biggest issue with the current implementation of this is the approx. n(n−1)/2 `implicit` conversion definitions needed to handle the “subtyping” of structures. At the very least, it’s easy to forget some without noticing until a user manages to hit that exact case.

Also, the errors that come out when an instance isn’t available are generally terrible – in fact, sometimes nonsensical, as far as I can tell.
Added a Haskell version that omits all the dependently-typed parts of the approach, but hopefully makes it accessible to a wider audience. It also falls back on the `newtype` instance hack, so it misses out on some of the “magic” instance resolution.
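A condensed, self-contained sketch of that hack (this compresses the file above: one tag class per law family, left/right identity collapsed into a single `Identity` class, and `Integer` in place of `Natural` – those simplifications are mine, not the gist’s exact API):

```haskell
{-# LANGUAGE ConstraintKinds, FlexibleInstances #-}

import Data.Monoid (Product (..), Sum (..))

class Closed a where op :: a -> a -> a

-- Empty tag classes standing in for the (unproved) laws.
class Closed a => Associative a
class Closed a => Identity a where unit :: a

type Monoid' a = (Associative a, Identity a)

instance Closed (Sum Integer)     where op = (+)
instance Closed (Product Integer) where op = (*)
instance Associative (Sum Integer)
instance Associative (Product Integer)
instance Identity (Sum Integer)     where unit = 0
instance Identity (Product Integer) where unit = 1

fold' :: Monoid' a => [a] -> a
fold' = foldr op unit

-- The newtype wrapper is what selects the instance – `fold'` applied
-- directly to [Integer] would be ambiguous. This is exactly the
-- resolution the dependently-typed version gets from its implicits.
sumI, productI :: [Integer] -> Integer
sumI     = getSum     . fold' . map Sum
productI = getProduct . fold' . map Product
```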
I also wonder if it would be possible to model these algebraic structures as a metalanguage. It would make a nice finite category, right? The main thing to take care of would be products. Specify some basic properties and name them, then form products freely, and name some of the important products. Add some morphisms (theorems about which properties imply which others) and uniqueness theorems (about identities and inverses, I guess). You could use this to generate code: generate the properties, the named products, the transitive “instances” from these, as well as minimal constructors like I mentioned above.