Acid-base equilibria

Acids and bases are easy to deal with when they are strong, as you can assume that any reaction goes to completion. However, weak acids and bases don’t always ionise completely, so you need to look at the equilibria to fully understand them.

Acids dissociate in water to give a proton (or, strictly, a hydronium ion) and the conjugate base ion:

HA ⇄ H+ + A−

We can put this into an equilibrium expression, which gives us Ka, the acid ionisation constant:

Ka = [H+][A−]/[HA]

Ka is an indication of the strength of the acid: the stronger the acid, the larger the Ka.

The same thing applies to bases:

B + H2O ⇄ HB+ + OH−

leading to the Kb expression:

Kb = [HB+][OH−]/[B]

Ka and Kb are not independent: for a conjugate acid-base pair, they are related through the ionisation constant of water, Ka × Kb = Kw.
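As a quick numerical sketch of this relationship (the Ka of NH4+ is an approximate textbook value):

```python
# Ka·Kb = Kw for a conjugate acid-base pair (approximate textbook values).
KW = 1.0e-14      # ion product of water at 25 °C
KA_NH4 = 5.6e-10  # Ka of NH4+, the conjugate acid of NH3 (approximate)

KB_NH3 = KW / KA_NH4  # Kb of ammonia
print(f"Kb(NH3) ≈ {KB_NH3:.2e}")
```

which recovers the familiar Kb ≈ 1.8 × 10−5 for ammonia.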


Most acid-base equilibria can be treated like any other chemical equilibrium problem; buffers, however, require a bit more discussion.

A buffer is a solution that can resist changes in pH when small amounts of acid or base are added. Buffers are very important in a broader sense, as they regulate the pH of your body and of the oceans. A buffer contains a conjugate pair of a weak acid or base, generally in roughly equal amounts. For example, a carbonate buffer would look like this:

H2CO3 ⇄ H+ + HCO3−

where you have added H2CO3 and NaHCO3 to water. The buffer can shift its equilibrium between the acid and conjugate base when small amounts of acid or base are added. Because you start with such large amounts of H2CO3 and HCO3−, any small addition will not shift the overall equilibrium to a large degree, and so the pH is regulated.

The Henderson-Hasselbalch equation is used to calculate the pH of a given buffer, and is just a rearrangement of the Ka expression:

pH = pKa + log([A−]/[HA])
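A short sketch of Henderson-Hasselbalch in use (the pKa is an approximate value for acetic acid; the concentrations are made up for illustration):

```python
import math

# pH of an acetate buffer via Henderson-Hasselbalch (illustrative values).
pKa = 4.76        # acetic acid (approximate)
conc_acid = 0.10  # [HA], mol/L
conc_base = 0.10  # [A−], mol/L

pH = pKa + math.log10(conc_base / conc_acid)
print(f"pH = {pH:.2f}")  # equal parts acid and base give pH = pKa
```

With equal concentrations the log term vanishes, which is why buffers are usually prepared near the pKa of the acid.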




Filed under Chem 1, Inorganic Chemistry, Physical Chemistry

Chemical equilibrium

Not all chemical reactions go to completion; most run backwards and forwards around a point somewhere in the middle, eventually appearing to come to a stop. Of course, the reaction doesn't actually cease at this point: the forward and reverse reactions simply continue at equal rates, which is why it is known as dynamic equilibrium.

Technically, equilibrium is defined as the point at which the rates of the forward and reverse reactions are equal.

We express the point of equilibrium using the equilibrium expression:

K = [C]^c [D]^d / [A]^a [B]^b

for the reaction aA + bB ⇄ cC + dD. K is the equilibrium constant: the larger it is, the more the reaction favours the products. Obtaining the equilibrium constant is simply a matter of substituting the equilibrium concentrations into the equilibrium expression.

Alternatively, you may be given K and the initial concentrations and asked to find the equilibrium concentrations. There is a worked example here. The procedure to answer these problems is as follows:

  1. Set up a table of concentrations showing the initial, change and final concentrations for all products and reactants.
  2. Substitute the “equilibrium concentrations” (including the x) into your equilibrium expression.
  3. Solve!
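The three steps above can be sketched numerically. This example assumes a weak acid with a textbook Ka and a made-up initial concentration, and solves the ICE table with the quadratic formula:

```python
import math

# ICE table for HA ⇄ H+ + A− solved with the quadratic formula
# (Ka is approximately that of acetic acid; c0 is made up for illustration).
Ka = 1.8e-5  # acid ionisation constant (assumed)
c0 = 0.10    # initial [HA], mol/L

# Step 1 (ICE table): [H+] = [A−] = x, [HA] = c0 - x at equilibrium.
# Step 2 (substitute): Ka = x^2 / (c0 - x)  →  x^2 + Ka·x − Ka·c0 = 0
# Step 3 (solve): quadratic formula, taking the positive root.
a, b, c = 1.0, Ka, -Ka * c0
x = (-b + math.sqrt(b**2 - 4 * a * c)) / (2 * a)

pH = -math.log10(x)
print(f"[H+] = {x:.2e} M, pH = {pH:.2f}")
```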

And for those of you who have forgotten year 10 maths, the quadratic equation and a reminder of how to use it are here. For those of you who do remember, you can see a cat on a vacuum cleaner.



Filed under Chem 1, Physical Chemistry

Crystal field theory

We have been discussing metal complexes using the valence bond theory, but while it is a useful method for discussing simple bonds, it cannot account for the colours and magnetic properties of metal complexes. To understand why metal complexes are such beautiful colours, we have to use crystal field theory.

Crystal field theory gives us information on the electronic structure of the metal atom, considering how the d orbitals will be affected by the ligands.

For an octahedral complex, we imagine the ligands are point charges that sit on the Cartesian (x,y,z) axes. The d orbitals are arranged around the nucleus as shown below, with two orbitals pointing directly towards the ligands, and the other three in-between.

The d(x2-y2) and d(z2), which point at the ligands, experience electrostatic repulsion and are therefore at higher energy than the d(xy), d(yz) and d(xz) orbitals. This means the d orbitals are no longer degenerate, and the orbital diagram is split:

The energy difference between the high and low energy orbitals is ∆O (the ligand field splitting parameter, if you're feeling wordy), and it is the size of ∆O that dictates the spin, colour and magnetic properties of the complex. The size of the splitting parameter is determined by the field strength of the ligand. A strong field ligand, such as CN−, will give a large splitting parameter, while a weak field ligand, I− for example, will give a small value for ∆O. The ligands are arranged into the spectrochemical series, indicating the strength of the ligand field.

Now that we know about splitting, the next question is: how do the d electrons distribute themselves between the non-degenerate orbitals? For example, take two Fe2+ complexes, [Fe(CN)6]4− and [Fe(H2O)6]2+. We get two different splitting diagrams:

When we go to fill the orbitals according to Hund's rule, we should half fill each orbital before going back and pairing the electrons, as it requires energy to put two electrons in one orbital. This is fine if ∆O is small, as with [Fe(H2O)6]2+, and we get a high spin complex, with a higher number of unpaired electrons. However, if ∆O is large, as with [Fe(CN)6]4−, the pairing energy is lower than the energy required to promote an electron into the high energy orbitals, so the complex is low spin.

High spin complexes are paramagnetic, as the unpaired electrons give the complex an inherent magnetism. The magnetic moment can be calculated using the spin-only formula: μs.o. = √{4S(S+1)}, where S is the total spin quantum number, i.e. half the number of unpaired electrons. Equivalently, μs.o. = √{n(n+2)}, where n is the number of unpaired electrons.
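A small sketch of the spin-only formula, applied to the high and low spin d6 cases above (moments in Bohr magnetons):

```python
import math

# Spin-only magnetic moment, in Bohr magnetons (μB).
def spin_only_moment(n_unpaired: int) -> float:
    S = n_unpaired / 2                 # total spin quantum number
    return math.sqrt(4 * S * (S + 1))  # equivalent to sqrt(n(n+2))

# High-spin Fe2+ (d6) has 4 unpaired electrons; low-spin d6 has none.
print(f"high spin: {spin_only_moment(4):.2f} μB")
print(f"low spin:  {spin_only_moment(0):.2f} μB")
```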

The colour of transition metal complexes also depends on the magnitude of ∆O, as the energy of the absorbed light corresponds to the energy difference between the low and high energy orbitals. If ∆O is small, the complex will absorb lower energy (red) light and appear blue-green; the opposite is true for complexes with a large ∆O.


Filed under Chem 2, Inorganic Chemistry

Youtube Friday

Making salt… with FIRE!


Filed under Uncategorized

Transition metal complexes

Transition metal atoms and ions can act as Lewis acids, accepting electron pairs from molecules or ions with electrons to spare (Lewis bases).


A complex ion is a metal ion with Lewis bases attached through coordinate covalent bonds; a metal complex or coordination compound is the same thing, but neutral overall.

Ligands are the Lewis bases attached to the metal ion. They may be small molecules or ions.

The coordination number is the number of ligands attached to the metal ion.

Denticity refers to how many bonding sites exist on the ligand, or how many times a single ligand can bond to the central metal ion.


The rules for naming metal complexes are much the same as for other nomenclatures:

  1. Always name the cation before the anion.
  2. The ligands are named first, then the metal atom but all in one word.
  3. The ligands are preceded by a Greek prefix telling you how many of them there are. If you have more than one type of ligand, they are listed in alphabetical order, (ignoring the prefix).
    1. The prefixes are the standard di-, tri-, tetra-… for simple ligands, but become bis-, tris-, tetrakis-… for more complex ligands. As a general rule, if a ligand name already has a prefix in it (for example ethylenediamine), it is “complex”.
  4. Anionic ligands end in –o; neutral ligands keep the same name as the parent compound (there are a few exceptions: NH3 = ammine; H2O = aqua).
  5. The metal atom generally keeps its normal name, but ends with –ate if the complex is an anion. Some metals, however, use the old-fashioned Latin names for the anion (eg. Cu = cuprate; Au = aurate). If the symbol on the periodic table doesn’t match the modern name, give it its ye-olde name. You always give the oxidation state of the metal in Roman numerals.

Some examples:

[Pt(NH3)4Cl2]Cl2: tetraamminedichloroplatinum(IV) chloride

[Fe(CN)6]4−: hexacyanoferrate(II)

[Co(en)3]Cl3: tris(ethylenediamine)cobalt(III) chloride
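The naming rules above can be sketched as a toy function. This is a hypothetical helper, not a general IUPAC namer: it assumes simple ligands (standard di-/tri-/… prefixes), at most six of each, and a cationic or neutral complex, so rules 1 and 5's -ate ending are left out:

```python
# Toy sketch of naming rules 2-4 above (hypothetical helper, illustration only).
PREFIXES = {1: "", 2: "di", 3: "tri", 4: "tetra", 5: "penta", 6: "hexa"}
ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV", 5: "V", 6: "VI"}

def name_complex(ligands: dict, metal: str, oxidation_state: int) -> str:
    """ligands maps ligand name -> count; ligands are listed alphabetically
    (rule 3), each with its Greek prefix, then the metal, all in one word."""
    parts = [PREFIXES[count] + name for name, count in sorted(ligands.items())]
    return "".join(parts) + metal + "(" + ROMAN[oxidation_state] + ")"

print(name_complex({"ammine": 4, "chloro": 2}, "platinum", 4))
# tetraamminedichloroplatinum(IV)
```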


Just like your favourite part of organic chemistry, metal complexes have isomers too! And they are nearly as much fun to name.

There are, as before, two main types of isomers.

Structural isomers deal with differences in the way the atoms are bonded together.

Ionisation isomers: where the ligands and the counterions are exchanged.

[Pt(NH3)4Cl2]Br2 vs [Pt(NH3)4Br2]Cl2

Coordination isomers: where compounds containing complex anions and cations differ in the distribution of ligands between them.

[Co(en)3][Cr(CN)6] vs [Cr(en)3][Co(CN)6]

Linkage isomers: where a ligand attaches via different atoms.

[Mn(CO)5(SCN)] vs [Mn(CO)5(NCS)]

Stereoisomers have the same bonds, but a different arrangement of them in space.

Geometric isomers: differ in the relative positions of the ligands in space.

If you have a pair of the same ligands, you can have cis- and trans- isomers:

Images courtesy of Doc Brown.

If you have three, you can get fac- and mer- isomers:

Just to make your life difficult, there are also enantiomers, non-superimposable mirror images:



Filed under Chem 2, Inorganic Chemistry

Thermodynamics and spontaneity

The first law of thermodynamics is essentially the law of conservation of energy as applied to thermodynamic systems: the change in internal energy of a system, ΔU, is equal to the sum of the heat and work of the system, ΔU = q + w.

The second law of thermodynamics describes whether or not a change is spontaneous, expressing it in terms of entropy.

Entropy (S) is the thermodynamic quantity that describes the disorder (randomness) in a system. Entropy is related to the number of states available to a molecule: a molecule at high temperature has more vibrational states available than one at a lower temperature, and therefore has higher entropy. Likewise, a crystal locks molecules into a certain configuration, whereas molecules in a gas are free to move about and therefore have higher entropy.

The second law of thermodynamics states that the total entropy of a system and its surroundings always increases for a spontaneous process. Generally, we refer to this as the entropy of the universe: the sum of the entropies of the system and surroundings must increase, although one of them may decrease and the process still be spontaneous.

ΔSuniverse = ΔSsystem +ΔSsurroundings

We can restate this law so it refers only to the system: as heat flows into or out of the system, entropy goes with it, so at a given temperature the entropy change is associated with the heat q:

ΔS > q/T for a spontaneous process

That is, for a spontaneous process at a given temperature, the change in entropy must be greater than the heat divided by the absolute temperature. For systems at equilibrium, ΔS = q/T.

Phase changes:

The entropy of a phase change is derived from the equation above:

ΔS = ΔH/T

where ΔH is the enthalpy (heat) of the phase change, and T is the temperature at which the phase change occurs.
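As a quick sketch with approximate textbook values for the vaporisation of water:

```python
# Entropy of vaporisation of water from ΔS = ΔH/T (approximate textbook values).
dH_vap = 40.7e3  # J/mol, enthalpy of vaporisation of water
T_boil = 373.15  # K, boiling point of water at 1 atm

dS_vap = dH_vap / T_boil
print(f"ΔS_vap ≈ {dS_vap:.1f} J/(mol·K)")
```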


Looking at entropy and enthalpy together, we can determine whether or not a process is spontaneous. This introduces the concept of free energy (sometimes called Gibbs free energy, G), whose change is:

ΔG = ΔH − TΔS

For a spontaneous process, ΔG is negative, i.e. ΔH − TΔS < 0. We want the TΔS term to outweigh the ΔH term: even if a reaction is endothermic, if TΔS is large enough, the reaction will still proceed.
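A short sketch of this trade-off, using approximate values for melting ice (ΔH and ΔS treated as constant over the temperature range):

```python
# Spontaneity check via ΔG = ΔH − TΔS (approximate values for ice → water).
def gibbs(dH: float, dS: float, T: float) -> float:
    """dH in J/mol, dS in J/(mol·K), T in K; returns ΔG in J/mol."""
    return dH - T * dS

dH, dS = 6.01e3, 22.0  # melting ice: endothermic, entropy increases
for T in (263.15, 283.15):  # -10 °C and +10 °C
    dG = gibbs(dH, dS, T)
    verdict = "spontaneous" if dG < 0 else "non-spontaneous"
    print(f"T = {T:.2f} K: ΔG = {dG:+.0f} J/mol ({verdict})")
```

Below 0 °C the TΔS term is too small and ice is stable; above it, TΔS wins and melting proceeds, even though ΔH is positive.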




Filed under Chem 1, Physical Chemistry
