No, I don't mean regular expressions.  I mean that thing you see in "modern" programming languages where you'll have two functions of the same name that take different arguments and do different things.  We use it all the time.  The plus symbol, "+", often has two separate meanings, and is often processed by two different operations:

add(number, number) -> mathematical addition
add(string, string) -> textual concatenation

Now, most programming languages provide this differentiation by convention.  C++ is fairly unusual in that it permits it explicitly; you can use the same name for a function with different inputs (the arguments), and the C++ compiler will resolve each call to that function name correctly, at compile time, depending upon the argument types you provide.

But modern languages like Haskell do this explicitly and without any special machinery.  You can reuse function names all day, and as long as the compiler can tell each version apart by its input and output types, Haskell will associate the right function in the right way.  (I think Haskell is annoying in that it doesn't even matter what order you make these declarations in, but some people seem to like that.)
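To make that concrete, here's a minimal sketch of reusing the one name, add, at two different types via a type class.  (Addable is a class name I made up for illustration; it's not anything standard.)

```haskell
{-# LANGUAGE FlexibleInstances #-}
-- Sketch only: "Addable" and "add" are invented names, not a standard API.
module Main where

class Addable a where
  add :: a -> a -> a

instance Addable Int where
  add = (+)             -- mathematical addition

instance Addable [Char] where
  add = (++)            -- textual concatenation

main :: IO ()
main = do
  print (add (2 :: Int) 3)      -- prints 5
  putStrLn (add "con" "cat")    -- prints concat
```

The compiler picks the instance from the argument types at compile time, which is much the same trick as C++ overload resolution.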

It turns out this behavior is, for programming, really old.  Like, 1960s-era old.  The λ-calculus that McCarthy turned into Lisp back in 1960 described it.  McCarthy didn't know how to turn pattern matching into a general-purpose feature, so we ended up with "if" statements and conditionals and all the rest to decide these things by hand, but the original math seemed to require automatic pattern matching.  Take the "sequence" operation: this is when a function has a list of things to do, several statements in a row, and returns the value of the last expression.  The math for this reads:

Ε+[[π]]ρκσ = Ε[[π]]ρκσ

Ε+[[π π+]]ρκσ = (Ε[[π]]ρ (λεσ1.Ε+[[π+]]ρκσ1)) σ

The left side of each equation denotes "the meaning of..."  In this case, line one is "the meaning of a sequence of one statement is just the meaning of that statement, after we've taken into account the environment, the continuation, and the store."  And line two is "the meaning of a statement with more statements following is the meaning of the first statement, whose value we then discard to derive the meaning of the rest of the statements, all the while taking into account the environment and the store."  There's actually more to it, as the environment and memory can change from statement to statement, but that's already handled by our understanding of what ρ and σ mean.

In a common language like Python or JavaScript, we would write an 'if' statement to say "if this is a single statement, we're done and return; otherwise process the first and recurse."  But in Haskell you can just write those two equations explicitly, and it works; the compiler knows exactly what you mean, and does the right thing, because there's no ambiguity there at all.
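In fact, the two equations above transliterate into Haskell almost symbol for symbol.  Here's a minimal sketch; the statement language and the type names (Stmt, Env, Store, Cont) are toy definitions of my own, not any standard interpreter, but the two cases of evalSeq are the two lines of math:

```haskell
-- Sketch: the sequence equations as Haskell pattern matching.
module Main where

type Env   = [(String, Int)]      -- rho: the environment
type Store = [(Int, Int)]         -- sigma: the store
type Cont  = Int -> Store -> Int  -- kappa: the continuation

data Stmt = Lit Int | Var String  -- a toy statement language

-- E[[pi]] rho kappa sigma: the meaning of a single statement.
eval :: Stmt -> Env -> Cont -> Store -> Int
eval (Lit n) _   k s = k n s
eval (Var x) env k s = case lookup x env of
                         Just v  -> k v s
                         Nothing -> error ("unbound: " ++ x)

-- E+[[...]]: one equation per pattern, no 'if' in sight.
evalSeq :: [Stmt] -> Env -> Cont -> Store -> Int
evalSeq [p]      env k s = eval p env k s
evalSeq (p : ps) env k s = eval p env (\_ s1 -> evalSeq ps env k s1) s
evalSeq []       _   _ _ = error "empty sequence"

main :: IO ()
main = print (evalSeq [Lit 1, Lit 2, Var "x"] [("x", 42)] (\v _ -> v) [])
-- prints 42: the values of the first two statements are discarded.
```

The continuation passed to each non-final statement throws its value away and carries the new store forward, exactly as the λεσ1 term in the math does.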

I just wish I could write them in English, which is my native tongue, and not have to translate from Greek in my head.