A History of the Ministry of Information, 1939-46


PART 3. CRITICAL COMMENTS

15. Various estimates of the number of applications for the Defence Medal

The Home Office estimated that from four to seven million civilians were qualified. They did not officially estimate the most probable number of applications, but a total of three millions was envisaged as possible. On the results of the survey reported here, the Social Survey estimated that the number might range from half a million to three millions, and gave one and a half to two millions as the most probable number. Answers to questions 15 and 16 show that among the civilian population the most popular estimate was just over two millions, but that estimates given by different individuals ranged from under a quarter of a million to over ten millions. In a survey conducted in May 1946 (Survey No. 133) the British Institute of Public Opinion obtained figures which yield an estimate of the number of applications already initiated by that date as over four millions.

On May 1, 1947, the following figures were given of the number of applications received by the Home Office to that date:

Table 9
Source Number of Applications
Police 147,269
National Fire Service 50,625
Local Authorities Civil Defence, total 220,715
Government Departments ditto, total 48,735
Others 16,960
Total 484,304

Applications were still being received at the rate of about 500 a week - but to raise the total to half a million that rate would need to be sustained for another 32 weeks.

Although this survey may have helped to reduce the amount of uncertainty which prevailed on this subject, the information it provided was inaccurate. The only extenuations that can be claimed for it are that it was less inaccurate than the other information available at the same time, and that its general tendency was to reduce the concern, then felt, about the possibility of an overwhelming number of applications - concern which, as subsequent events proved, was unnecessary.

Some attempt to discover the source or sources of so large an error must now be made. The main possible sources are taken in turn, and the amount of error likely to be traceable to each is considered.


16. Possible sources of error in the sampling procedure

Each interviewer was given a quota defining the total of people to be interviewed, and specifying how many of them were to be of a particular sex and in a particular occupation, and how many were to be housewives of specified ages. This method of sampling has since been abandoned in favour of more rigorous methods, but as long as interviewers adhered to the instructions they received there was no reason for fearing that any particular bias would be introduced into the results. An overestimate is no more to be expected than an underestimate. The margin of error which should be allowed for the estimate may need to be enlarged for such reasons, but the estimate finally obtained should still remain the best obtainable from the data.

Within the limits of the quotas, however, interviewers were free to choose whom to interview, and some may have shown a bias in their choice. For instance, when asking for permission to interview people at their places of work, some interviewers may have occasionally found themselves pressed to interview employees with special qualifications for the Defence Medal.

Table 10 shows the number of people seen by each interviewer in each region, and what proportion of them answered ‘Yes’ to question 1: “Have you seen or heard anything about the Defence Medal?”

TABLE 10
Region (code no.) Interviewer (code no.) Total number of people interviewed Per cent answering ‘Yes’ to Qn. 1
1 5 52 42
... 6 45 24
2 9 77 25
... 10 45 64
... 11 102 54
3 14 24 38
... 17 90 56
... 18 63 29
4 35 76 53
... 36 73 40
5 4 41 29
... 10 40 75
... 14 30 77
... 22 24 50
... 24 46 30
... 28 19 26
... 32 131 49
... 33 55 38
6 23 73 76
... 24 56 52
... 26 5 100
7 26 70 26
... 27 47 34
... 35 16 56
... 36 11 64
8 30 72 69
... 31 74 76
9 18 15 27
... 19 72 47
... 20 64 28
... 21 87 74
10 10 30 50
... 13 76 62
... 14 50 22
... 15 70 33
... 16 1 0
... 33 67 40
11 1 90 31
... 2 81 55
... 3 90 41
12 22 76 50
... 28 43 33
all all 2,369 47

The variation shown between the interviewers is very wide. Unfortunately, as different interviewers worked in different regions and had differently composed quotas, the variations cannot be traced to one cause. They cannot, therefore, be attributed with certainty to individual bias.

Giving interviewers the benefit of every possible doubt we may enquire: Even if the maximum allowance is made for differences between the regions, are not the differences between interviewers working in the same region still too great to be attributable to chance? The analysis in Table 11 provides an answer:

Table 11
Source of Variance Sum of Squares Degrees of Freedom Mean Square
a . Total variance of answers to Qn. 1 590.49 2,368 ...
b . Variance between regions 23.55 11 2.14
c . Variance between interviewers in the same region 43.89 30 1.46
d . Residual variance 523.05 2,327 .22

For test of significance of variance from source c :

F = 6.5 P < .01

It shows that there are significant differences between the interviewers which cannot be attributed to the fact that they worked in different regions.
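For illustration, a partition of this kind can be computed directly from the figures in Table 10. The sketch below (in Python, purely illustrative) shows the calculation; only an excerpt of the table is typed in, and the published percentages are rounded, so it does not reproduce Table 11 exactly. The partition of Table 12 follows from the same routine with the roles of regions and interviewers interchanged.

```python
# Illustrative sketch of the partition of variance in Table 11: answers to
# Qn. 1 are scored 0 or 1, and the total sum of squares is split between
# regions, interviewers within the same region, and a residual.
# Each cell is (region, interviewer, number interviewed, number answering 'Yes').
# The three cells below are an excerpt of Table 10 only; the 'Yes' counts are
# reconstructed approximately from the rounded percentages.
cells = [
    (1, 5, 52, 22),   # region 1, interviewer 5: 52 seen, about 42% 'Yes'
    (1, 6, 45, 11),
    (2, 9, 77, 19),
]

n_total = sum(n for _, _, n, _ in cells)
yes_total = sum(y for _, _, _, y in cells)
grand_p = yes_total / n_total

# (a) total sum of squares of a 0/1 variable about its mean
ss_total = yes_total * (1 - grand_p) ** 2 + (n_total - yes_total) * grand_p ** 2

# (b) between regions
regions = sorted({r for r, _, _, _ in cells})
region_n = {r: sum(n for rr, _, n, _ in cells if rr == r) for r in regions}
region_p = {r: sum(y for rr, _, _, y in cells if rr == r) / region_n[r] for r in regions}
ss_regions = sum(region_n[r] * (region_p[r] - grand_p) ** 2 for r in regions)

# (c) between interviewers within the same region
ss_interviewers = sum(n * (y / n - region_p[r]) ** 2 for r, _, n, y in cells)

# (d) residual (within-interviewer) variation
ss_residual = ss_total - ss_regions - ss_interviewers

df_regions = len(regions) - 1
df_interviewers = len(cells) - len(regions)
df_residual = n_total - len(cells)

# F-test for source (c): interviewer mean square over residual mean square
f_ratio = (ss_interviewers / df_interviewers) / (ss_residual / df_residual)
print(ss_total, ss_regions, ss_interviewers, ss_residual, f_ratio)
```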

Going now to the other extreme, we may enquire: Are there any real differences between the regions, or are apparent differences solely due to the fact that different interviewers were employed in different regions? This possibility can be examined because some of the interviewers did part of their quota in one region and part in another. The alternative analysis of the same data given in Table 12 provides an answer:

Table 12
Source of Variance Sum of Squares Degrees of Freedom Mean Square
a . Total variance of answers to Qn. 1 590.49 2,368 ...
b . Variance between interviewers 56.36 29 1.94
c . Variance between regions covered by the same interviewer 11.08 12 .92
d . Residual variance 523.05 2,327 .22

For test of significance of variance from source c :

F = 4.1 P < .01

It shows that when interviewers change the region where they are working, the proportion of ‘Yes’ answers they report to Qn. 1 also changes to an extent too great to be attributable to chance. Therefore there appear to be significant differences between regions. The rates observed in different regions are shown in Table 13.

Table 13
Region Code No. Area Total number of people interviewed Per cent answering ‘Yes’ to Qn. 1
1 Northern 97 34
2 North Eastern 224 46
3 North Midlands 177 44
4 Eastern 149 46
5 London 386 47
6 Southern 134 72
7 South Western 144 35
8 Wales 146 72
9 Midlands 238 50
10 North Western 294 42
11 Scotland 261 42
12 South Eastern 119 44
Total 2,369 47

While these figures may suggest that the error of the final estimate, that 47 per cent of the population sampled had heard of the Defence Medal, may be rather larger than the standard error of such a percentage, they do not furnish any adequate grounds for supposing that any other estimate, whether higher or lower, is to be preferred. Nor do they suggest that the margin of error is very much greater. The residual variance left after both sources of variance, between interviewers and between regions, have been deducted still amounts to 89 per cent of the total. Only 11 per cent can be attributed to such sources. This is quite insufficient to account for the fact that the number of applications estimated was approximately three and a half times the number received.

Although errors may have occurred in the sampling procedure, no grounds have been established for attributing any substantial part of the excess to them.

17. Possible influence of administrative processes

These two numbers need not agree exactly. One, given in the report, is an estimate of the number of applications expected to be initiated; the other, given by the Home Office, is a count of the applications received. The processes through which applications pass after initiation may reduce the number finally received. D.M.2s. have to be forwarded through a certifying authority, and D.M.3s. and 4s. have to undergo supplementary processes as well. According to the figures in Table 6, out of 224 people who believed themselves qualified, 73 appeared to lack adequate qualifications. It is possible therefore that for every 100 applications finally reaching the Home Office, some 48 more may have been initiated but not certified. If so, the total of 484,304 applications received may be the survivors from a total of some 733 thousand claims initiated. It is even conceivable that some valid claims have gone astray; for, as was pointed out in section 1, the procedure provides the applicant with no safeguard against this risk.

All influences of this kind tend in the same direction - towards reducing the number of applications received. Some part of the excess may therefore be attributed to such causes, but they cannot be considered responsible for the whole, or even for the main part of the excess.


18. Possible sources of error in the wording of the questions and in the design of the enquiry

The main source of error appears to be in the amount of reliance placed on statements made by informants.

Some people who have said, Yes, I intend to obtain an application form, may not carry out their intention. Others, who have said, No, may change their minds and obtain a form after all. But it is noteworthy that these erroneous statements do not cancel one another out as, according to sampling theory, they should. We have no theoretical grounds for expecting more people to change their minds in one direction than in the other. But conditions prevailing at the time of the survey must have created a bias in favour of changes of one kind in particular.

If comparable evidence could be collected from surveys carried out under different conditions, useful principles might be discovered for anticipating occurrences of this kind. No improvement in estimation can be obtained unless means can be found for assessing the direction and the strength of the bias created by given conditions. It is regrettable that in the design of this enquiry no provision was made for obtaining assessments of this kind. Provisions that would have been possible are described below.

The strength and direction of the bias varies in different categories of service. In section 5 the total number of applications was estimated at about 1,750,000; and the proportion in several categories of service was estimated in section 6. The last column but one in Table 14 shows the number of claims that would thus be expected in the categories listed by the Home Office (c.f. Table 9).

Table 14

Excess of the number of applications estimated over the number observed, by category

Category Description Observed (=o) Estimated (=e) Ratio of Excess (e - o)/o
(Numbers of applications in thousands)
22 - 27 Police 147 84 - .43
21 N.F.S. 51 96 + .89
3 - 16 Local Authorities 220 994 + 3.51
17 Government Depts. 49 420 + 7.61
rest Others 17 156 + 8.19
all Total 484 1,750 + 2.61

NOTE: For testing significance, the frequencies given in Table 9 have to be compared with the following frequencies found among the 146 cases referred to in section 6: Police 7, N.F.S. 8, Local Authorities 83, Govt. Depts. 35, Others 13. The comparison yields the following values of chi-squared: Police 31.50, N.F.S. 3.45, Local Authorities 4.07, Govt. Depts. 28.06, Others 12.16 (1 d.f. each); total 79.24 (4 d.f.)
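The chi-squared values in the note appear to be obtained by comparing the 146 sample cases in each category with the numbers expected if the cases were distributed in the same proportions as the Home Office figures of Table 9. A minimal sketch of that comparison (Python, for illustration only; the assumption about how the expected frequencies were formed is the present writer's):

```python
# Sketch of the chi-squared comparison described in the note to Table 14.
# Observed: sample cases in each category (section 6); expected: 146 cases
# distributed in the proportions of the Home Office figures (Table 9).
home_office = {"Police": 147_269, "N.F.S.": 50_625, "Local Authorities": 220_715,
               "Govt. Depts.": 48_735, "Others": 16_960}
sample = {"Police": 7, "N.F.S.": 8, "Local Authorities": 83,
          "Govt. Depts.": 35, "Others": 13}

n_sample = sum(sample.values())          # the 146 cases referred to in section 6
n_received = sum(home_office.values())   # the 484,304 applications received

chi_squared_total = 0.0
for category, observed in sample.items():
    expected = n_sample * home_office[category] / n_received
    contribution = (observed - expected) ** 2 / expected
    chi_squared_total += contribution
    print(f"{category}: {contribution:.2f}")
print(f"total: {chi_squared_total:.2f} (4 d.f.)")
# This gives approximately 31.5, 3.5, 4.1, 28.1 and 12.2, with a total of
# about 79, agreeing with the values in the note to within rounding.
```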

The ratio of excess shows the strength and direction of the change. Amongst the police the number of applications estimated is actually less than the number observed. While it is tempting to infer that policemen's statements about their intentions are exceptionally reliable, erring, if at all, on the side of caution, there are other explanations which unfortunately cannot be excluded. The finding may be connected with the fact that the police have remained in uniform whereas most of the other services have been disbanded. If the ratio of actual to potential claimants in the forces could be ascertained, it might be useful for comparison.

The estimate of the total number of applications given in section 5 is built up from the numbers of people in the sample giving particular answers to several different questions, as shown in Table 2. Evidence on the relative reliability of the answers to the different questions may be worth reviewing.

Hardly anyone is likely to have reported having completed an application form and sent it in without having actually done so; and most people who have said, it is completed but not sent in, are likely to have despatched it later on - although no evidence is available on these two points. But if such statements are reliable, they alone would account for a total of some 623 thousand claims. Since the total number of claims initiated appears to have been of the order of 733 thousand (section 17), this must lead us to suppose that only some 110 thousand claims were finally initiated by all the people (estimated at 2,514 thousand) who had obtained but not completed their forms, or who intended to obtain them. We must therefore conclude that very little reliance can be placed on statements to this effect - especially on answers of ‘Yes’ to Qn. 5: “Do you intend to obtain an application form?”

The 162 people in the sample who gave such an answer can be divided according to their subsequent answers to Qns. 6 and 7 into

69 (42%) who were able to say when they intended to get their forms and knew that they should go to a Post Office for them,

35 (22%) who could say when, but did not know where to go,

27 (17%) who knew where, but could not say when,

31 (19%) who knew neither when nor where.

Some people's intentions were thus more fully formed and may perhaps be treated with greater confidence than others’. Further questions of this kind might provide a graduated scale for measuring the extent to which people's intentions are formed: for the variation is partly one of degree: intentions which are formed in one respect tend to be formed in the other also (the four-point correlation is .19 and is significant).
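For illustration, the four-point correlation quoted above can be recovered from the 2 x 2 break-down of the 162 answers just given (a minimal sketch in Python):

```python
from math import sqrt

# Four-point (phi) correlation between the two aspects of intention in the
# break-down above: being able to say when the form would be obtained, and
# knowing where (a Post Office) to obtain it.
knew_when_and_where = 69
knew_when_only = 35
knew_where_only = 27
knew_neither = 31

a, b, c, d = knew_when_and_where, knew_when_only, knew_where_only, knew_neither
phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(round(phi, 2))  # approximately .19, as quoted in the text
```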

But refinements of this kind would not suffice to indicate what proportion of such intentions will be put into effect; for this reason possible defects in the wording of the questions are not suspected of being an important source of error. In the present state of our knowledge, this proportion can only be discovered by experiment: e.g. by revisiting the informants after a given lapse of time and asking them whether they had carried out their stated intentions in the meanwhile. A multiple regression equation showing the relationships between the extent to which the intentions had been formed, the amount of time which had elapsed and the proportion of intentions fulfilled might provide the means of estimating the proportion to be expected after any given lapse of time. If such a regression equation were to be prepared, information recorded on other parts of the schedule (e.g. classificatory data) might be found worth taking into account. One way in which such an equation might be fitted is sketched below.
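No such revisits were made, so the sketch below is purely illustrative of the design proposed: a regression of fulfilment on the degree to which the intention was formed and on the time elapsed before the revisit. All figures and variable names are hypothetical and are not taken from the survey.

```python
import numpy as np

# Purely illustrative sketch of the proposed follow-up.  Each entry describes a
# hypothetical revisited informant: how fully the intention was formed at the
# first interview (0 = knew neither when nor where, 1 = one of the two,
# 2 = both), weeks elapsed before the revisit, and whether the stated intention
# had by then been carried out (1) or not (0).
formed    = np.array([2, 2, 1, 1, 0, 2, 0, 1, 2, 0], dtype=float)
weeks     = np.array([4, 8, 4, 8, 4, 12, 8, 12, 16, 16], dtype=float)
fulfilled = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0], dtype=float)

# Multiple regression of fulfilment on degree of formation and elapsed time,
# fitted by least squares with a constant term.
design = np.column_stack([np.ones_like(formed), formed, weeks])
coefficients, *_ = np.linalg.lstsq(design, fulfilled, rcond=None)

def expected_proportion(degree_formed: float, weeks_elapsed: float) -> float:
    """Estimated proportion of intentions fulfilled after a given lapse of time."""
    value = coefficients @ [1.0, degree_formed, weeks_elapsed]
    return float(np.clip(value, 0.0, 1.0))

print(expected_proportion(2, 10))
```

Classificatory data from other parts of the schedule could be brought in as further columns of the design matrix.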

Possibly, however, a sufficiently accurate estimate might be obtained by a simpler procedure not involving interviewing anyone more than once. From a series of surveys, made at regular intervals of time, the rate of increase in the proportion of informants who had already completed their applications could be measured. This might also be related to any changes found in the attitudes of the remaining informants. Thus the maximum number of applications to be expected might be estimated. Whether the total number of interviews required for the whole series would need to be greater than the total for the present survey is a question which would need special consideration.
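One way such an extrapolation might be made is sketched below, on the assumption that successive increments in the proportion of completed applications decline roughly geometrically; the proportions used are hypothetical.

```python
# Illustrative sketch of the simpler procedure: extrapolating the proportion of
# informants who have already completed applications to its eventual maximum.
# The proportions below are hypothetical; no such series of surveys was made.
completed = [0.05, 0.09, 0.12, 0.14]   # proportion completed at each successive survey

increments = [later - earlier for earlier, later in zip(completed, completed[1:])]

# If the increments decline geometrically with common ratio r, the rise still
# to come is approximately the last increment multiplied by r / (1 - r).
ratio = increments[-1] / increments[-2]
maximum = completed[-1] + increments[-1] * ratio / (1 - ratio)
print(round(maximum, 3))   # estimated maximum proportion of completed applications
```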

Further evidence on the reliability of people's intentions is provided by a comparison between answers to Qn. 5 and Qn. 17. Table 15 shows a break-down of the number of people who answered both questions.

Table 15
Answers to Question 5 (rows) Answers to Question 17 (columns)
Yes Uncertain No Total
Yes 110 4 4 118
Uncertain 19 49 7 75
Total 129 53 11 193

These two questions are very similar in effect. Qn. 5 is: “Do you intend to obtain an application form?” Qn. 17 is: “Will you complete an application form?” During the course of the interview between these two questions, informants were given an opportunity of examining the forms.

Since they are so similar, the correlation between the answers to these two questions can be treated as a measure of their reliability. It is .633 (NOTE 3). But the informants counted in this table are a selected fraction of the total sample interviewed: in particular, no one who answered ‘No’ to Qn. 5 was asked Qn. 17. The break-down to be expected if everyone who answered Qn. 5 had been asked Qn. 17 is shown in Table 16.

Table 16
Answers to Question 5 (rows) Answers to Question 17 (columns)
Yes Uncertain No Total
Yes 151 6 5 162
Uncertain 32 84 12 128
No 23 91 180 294
Total 206 181 197 584

The correction for homogeneity (NOTE 4) raises the correlation to .844.
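The form of the correction is not set out in the report. The standard correction for selection on one of two variables (of the kind discussed in Thomson's chapters on selection) is, for reference,

$$r' = \frac{r\,(S/s)}{\sqrt{1 - r^{2} + r^{2}\,(S/s)^{2}}},$$

where $r$ is the correlation observed in the selected group, and $S$ and $s$ are the standard deviations of the selection variable in the whole sample and in the selected group respectively. Whether this is the exact formula applied here, and what ratio $S/s$ was taken, are not stated.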

Tables 15 and 16 exhibit an interesting finding. Although changes in people's intentions show a bias against obtaining or completing application forms, although the complexities of the forms act as a mild deterrent, and although greater efforts are needed to give effect to affirmative answers to Qn. 17 than to Qn. 5, more people answer ‘Yes’ to Qn. 17. It seems difficult to escape from the conclusion that the course of the interview has exercised an influence on the opinions of the informants. It has led them to change their opinions in the contrary direction to the prevailing trend discovered from other evidence among the general population. The interviews were not simple fact-finding processes; they produced effects, modifying the attitudes of the informants.

NOTE 3. Product moment correlation using centroids (cf. Pearson, Tables for Statisticians and Biometricians, Vol. II, p. xxii).

NOTE 4. Cf. Thomson, G., The Factorial Analysis of Human Ability (esp. chaps. on the influence of selection).
