Books have been written on the theory of computation, but I shall keep it simple.
Computation is the transformation of sequences of symbols according to precise rules.
A series of such rules is often called an algorithm: a recipe for solving a particular problem by transforming symbols according to rules.
Turning a string of symbols representing English words into Morse code, for example, is a simple transformation. For ‘S’, transform to three dots (· · ·). For ‘O’, transform to three dashes (− − −). The ‘SOS’ distress signal thus becomes ‘· · · − − − · · ·’.
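Here is how that transformation might look in Python (a toy sketch; the table covers only the two letters needed for ‘SOS’, and a real encoder would include the whole alphabet):

    # A toy Morse encoder; the table covers only the letters used in 'SOS'.
    MORSE = {'S': '· · ·', 'O': '− − −'}

    def to_morse(text):
        # Apply one transformation rule per symbol, joining the results.
        return ' '.join(MORSE[letter] for letter in text)

    print(to_morse('SOS'))  # prints: · · · − − − · · ·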
A slightly more complicated but still simple example is deciding whether a numerical symbol (e.g. 2022) represents a leap year.
To figure this out, you divide the year by four. If the remainder (or “modulus”) is zero, you have a leap year. At least, that is the rule for the Julian calendar, introduced in 45 BC, a year the Romans called 709 ab urbe condita (from the foundation of the city).
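In Python, a sketch of the Julian rule is a one-liner (the function name here is mine; the % operator computes the modulus):

    def is_julian_leap_year(year):
        # Julian rule: any year divisible by four is a leap year.
        return year % 4 == 0

    print(is_julian_leap_year(2022))  # False: 2022 % 4 leaves remainder 2
    print(is_julian_leap_year(2020))  # True:  2020 % 4 leaves remainder 0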
The leap year algorithm for the Gregorian calendar (in use today in most of the world) for years since 1582 is more complex (a code sketch follows the steps below).
Take the year and divide it by 400.
If the modulus is zero, you have a leap year.
If not, divide the year by 100; if the modulus is zero, you have a normal year.
If not, divide the year by 4; if the modulus is zero, you have a leap year.
If not, you have a normal year.
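Here is the same algorithm as a short Python sketch (the function name is illustrative), making the chain of if/then tests explicit:

    def is_gregorian_leap_year(year):
        # The three tests above, applied in order.
        if year % 400 == 0:
            return True    # divisible by 400: leap year (e.g. 2000)
        if year % 100 == 0:
            return False   # divisible by 100 but not 400: normal year (e.g. 1900)
        return year % 4 == 0  # otherwise the Julian rule decides

    print(is_gregorian_leap_year(2000))  # True
    print(is_gregorian_leap_year(1900))  # False
    print(is_gregorian_leap_year(2022))  # False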
As more than one person has said, computation comes down to if/then statements. The catch is that algorithms require the rules for transforming sequences of symbols to be “well-defined.” Symbols for numbers and logical operations are “well-defined,” but other symbols, such as “art” or “beauty,” that refer to things of interest to humans are not. Thus there are practical limits to what you can do with algorithms.
Understanding the limits of computation is important: there are things computers are good at and things they are hopeless at, and it pays to know the difference. Speaking generally, computers are good at logic and mathematics but not so good at “common sense.”
This is why computers can be better than humans at games like chess and Go: these games can be reduced to logic and mathematics relatively easily. However, we have yet to see robots (with computers for brains) that can walk into people’s houses and make a cup of tea in an unfamiliar environment. Common sense is really hard to program!