A confidence interval doesn’t, in general, “have a multiplier”. I suppose your terminology comes from a specialized field or refers to a particular kind of confidence interval, but you don’t give enough information for me to help you with that part.
Some teachers explain this very badly, so let’s start at the beginning.
You took a sample from the population. The population of all possible measurements has a probability distribution. Sometimes this distribution is approximately a Gaussian (also known as “normal”) distribution.
It also has a mean and a standard deviation, neither of which you know. You can use your sample to estimate these quantities. We call the estimates the sample mean and the sample standard deviation.
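As a concrete illustration (a sketch in Python with NumPy; the measurements here are made-up values, not from your problem), the sample mean and sample standard deviation are computed like this:

```python
import numpy as np

# A hypothetical sample of 10 measurements (illustrative values only).
sample = np.array([4.8, 5.1, 5.0, 4.7, 5.3, 4.9, 5.2, 5.0, 4.6, 5.4])

# Sample mean: your estimate of the (unknown) population mean.
sample_mean = sample.mean()

# Sample standard deviation: ddof=1 divides by n-1, the usual estimator.
sample_sd = sample.std(ddof=1)

print(sample_mean, sample_sd)
```

Note the `ddof=1` argument: NumPy divides by n by default, so you must ask for the n−1 version explicitly.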
Now you have to use some imagination: imagine all possible samples you could have drawn, and the estimate each one would give. These estimates have distributions of their own, called sampling distributions.
The mean of the sampling distribution of the sample mean is the same as the mean of the population, but the mean of the sampling distribution of the sample standard deviation is not quite the population standard deviation (although it is close). However, the mean of the sampling distribution of the sample variance does equal the population variance, provided that when calculating the sample variance you divide by n−1 rather than n (where n is the sample size).
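You can see the n−1 point by simulation (a sketch in Python with NumPy; the population here is an assumed Gaussian with standard deviation 2, so its variance is 4):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000      # small samples, many imagined repetitions
pop_sd = 2.0                # population SD, so population variance is 4.0

# Each row is one imagined sample of size n from the population.
samples = rng.normal(loc=0.0, scale=pop_sd, size=(trials, n))

# Average the variance estimate over the sampling distribution.
var_n1 = samples.var(axis=1, ddof=1).mean()  # divide by n-1: unbiased
var_n = samples.var(axis=1, ddof=0).mean()   # divide by n: biased low

print(var_n1, var_n)
```

The n−1 version averages out to about 4.0, while the divide-by-n version averages to about 3.2 (it is too small by a factor of (n−1)/n).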
If the population has a Gaussian distribution, the sampling distribution of the mean also has a Gaussian distribution. Its mean is the same as the mean of the population. Its standard deviation is called the standard error of the mean, and it is much smaller than the standard deviation of the population: it equals the population standard deviation divided by the square root of the sample size, so it shrinks as the sample gets larger.
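The same kind of simulation shows the standard error directly (again a sketch in Python with NumPy, assuming a Gaussian population with mean 10 and standard deviation 2):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 25, 200_000
pop_sd = 2.0

# The mean of each imagined sample of size n: the sampling distribution.
means = rng.normal(loc=10.0, scale=pop_sd, size=(trials, n)).mean(axis=1)

# Spread of the sampling distribution of the mean vs. the theory.
observed_se = means.std(ddof=1)
theoretical_se = pop_sd / np.sqrt(n)  # sigma / sqrt(n)

print(observed_se, theoretical_se)
```

With n = 25 the theoretical standard error is 2 / 5 = 0.4, a fifth of the population standard deviation, and the simulated spread of the sample means comes out very close to that.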