File:Uusia koronavirustapauksia suomessa eksponentti ennuste 1.svg
Original file (SVG file, nominally 1,178 × 599 pixels, file size: 396 KB)
Summary
Description | Uusia koronavirustapauksia suomessa eksponentti ennuste 1.svg
Finnish: Forecast of new coronavirus cases in Finland in the winter of 2020/2021. The forecast is based on roughly one month of case counts, from which an average exponent is computed; possible future case numbers are projected from that exponent. The forecasting program assumes that the basic reproduction number R0 stays constant for months, which does not match the real situation.
Date | |
Source | Own work |
Author | Merikanto |
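The averaged-exponent forecast described in the summary can be sketched in a few lines: fit a straight line to the logarithm of daily case counts, then exponentiate to project forward. This is an illustrative sketch only; the 30-day series and the 3 % daily growth rate are invented, not taken from the file's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear(x, a, b):
    return a * x + b

# hypothetical month of daily case counts growing ~3 % per day
days = np.arange(30, dtype=float)
cases = 100.0 * np.exp(0.03 * days)

# fit the exponent in log space, then extrapolate 14 days ahead
popt, _ = curve_fit(linear, days, np.log(cases))
future = np.arange(30, 44, dtype=float)
forecast = np.exp(linear(future, *popt))
```

Fitting in log space turns the exponential-growth assumption into ordinary linear regression, which is what the script's `func`/`curve_fit` combination does on the real case counts.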
Python 3 code to produce image |
##################################################################
##
## COVID-19 Finland cases short-term forecast
## Python 3 script
##
## Input from internet site: cases.
##
## version 0001.0008
## 7.8.2022
##
###################################################################
import math
import json
import locale
import statistics
from datetime import datetime, timedelta, date

import numpy as np
from numpy import fft
import pandas as pd
import requests
from bs4 import BeautifulSoup

import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.ticker as ticker
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
    AutoMinorLocator, MaxNLocator, ScalarFormatter)

from scipy import interpolate, stats
import scipy.signal
from scipy.signal import savgol_filter
from scipy.interpolate import UnivariateSpline, pchip_interpolate
from scipy.optimize import curve_fit

from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.arima.model import ARIMA

from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn import svm, metrics
from sklearn.neighbors import KernelDensity

from pygam import LogisticGAM, LinearGAM

## ggplot helpers (optional)
#from plotnine import *
from mizani.breaks import date_breaks, minor_breaks
from mizani.formatters import date_format
##################################
#### settings / parameters
## correction parameter for y_hat
#y_coeff=1.05
y_coeff=2.25
## limits of the base data
today = date.today()
#print("Today's date:", today)
daybeforeyesterday = datetime.today() - timedelta(days=2)
paiva1="2021-07-17"
paiva2="2021-08-01"
#paiva2=daybeforeyesterday
## forecast limits
taika1=paiva2
taika2="2021-12-15"
## y-axis limits
ymax1=5000
ymax2=20
format1='%Y-%m-%d'
locale.setlocale(locale.LC_TIME, "fi_FI")
## linear model; fitted to log(cases) below, so exp(func(...)) is an exponential
def func(x, a, b):
    return (a*x+b)
def fourierExtrapolation(x, n_predict):
    n = x.size
    n_harm = 10                     # number of harmonics in model
    t = np.arange(0, n)
    p = np.polyfit(t, x, 1)         # find linear trend in x
    x_notrend = x - p[0] * t        # detrended x
    x_freqdom = fft.fft(x_notrend)  # detrended x in frequency domain
    f = fft.fftfreq(n)              # frequencies
    # sort indexes by absolute frequency, lower -> higher
    indexes = sorted(range(n), key = lambda i: np.absolute(f[i]))
    t = np.arange(0, n + n_predict)
    restored_sig = np.zeros(t.size)
    for i in indexes[:1 + n_harm * 2]:
        ampli = np.absolute(x_freqdom[i]) / n   # amplitude
        phase = np.angle(x_freqdom[i])          # phase
        restored_sig += ampli * np.cos(2 * np.pi * f[i] * t + phase)
    return restored_sig + p[0] * t
def cut_by_dates(dfx, start_date, end_date):
    mask = (dfx['Date'] > start_date) & (dfx['Date'] <= end_date)
    dfx2 = dfx.loc[mask]
    #print(dfx2)
    return(dfx2)
def load_country_cases(maa):
    dfin = pd.read_csv('https://datahub.io/core/covid-19/r/countries-aggregated.csv', parse_dates=['Date'])
    countries = [maa]
    dfin = dfin[dfin['Country'].isin(countries)]
    selected_columns = dfin[["Date", "Confirmed", "Recovered", "Deaths"]]
    df = selected_columns.copy()
    len1=len(df["Date"])
    aktiv2= [0] * len1
    dates=df['Date']
    rekov1=df['Recovered']
    konf1=df['Confirmed']
    death1=df['Deaths']
    ## double rolling mean of recovered cases
    spanni=6
    rulla = rekov1.rolling(window=spanni).mean()
    rulla2 = rulla.rolling(window=spanni).mean()
    tulosrulla= rulla2.replace(np.nan, 0)
    rulla2=np.array(tulosrulla).astype(int)
    konf1=np.array(konf1).astype(int)
    death1=np.array(death1).astype(int)
    for n in range(0,len1):
        #aktiv2[n]=konf1[n]-death1[n]-rulla2[n]
        aktiv2[n]=konf1[n]
    dailycases1= [0] * len1
    dailydeaths1= [0] * len1
    for n in range(1,len1):
        dailycases1[n]=konf1[n]-konf1[n-1]
        if (dailycases1[n]<0): dailycases1[n]=0
    for n in range(1,len1):
        dailydeaths1[n]=death1[n]-death1[n-1]
        if (dailydeaths1[n]<0): dailydeaths1[n]=0
    df.insert (2, "Daily_Cases", dailycases1)
    df.insert (3, "Daily_Deaths", dailydeaths1)
    df['ActiveEst']=aktiv2
    dfout = df[['Date', 'Confirmed','Deaths','Recovered', 'ActiveEst','Daily_Cases','Daily_Deaths']]
    print(".")
    return(dfout)
def load_fin_wiki_data():
    url="https://fi.wikipedia.org/wiki/Suomen_koronaviruspandemian_aikajana"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'lxml')
    table = soup.find_all('table')[0]  # grab the first table
    df = pd.read_html(str(table))[0]
    ## columns: Päivä Tapauksia Uusia tapauksia Sairaalassa Teholla Kuolleita Uusia kuolleita Toipuneita
    kaikkiatapauksia=df['Tapauksia']
    toipuneita=df['Toipuneita']
    uusiatapauksia=df['Uusia tapauksia']
    sairaalassa=df['Sairaalassa']
    teholla=df['Teholla']
    kuolleita=df['Kuolleita']
    uusiakuolleita=df['Uusia kuolleita']
    len1=len(kaikkiatapauksia)
    kaikkiatapauksia2=[]
    toipuneita2=[]
    uusiatapauksia2=[]
    sairaalassa2=[]
    teholla2=[]
    kuolleita2=[]
    uusiakuolleita2=[]
    for n in range(0,len1):
        elem0=kaikkiatapauksia[n]
        elem1 = ''.join(c for c in elem0 if c.isdigit())
        kaikkiatapauksia2.append(int(elem1))
        elem0=toipuneita[n]
        elem1 = ''.join(c for c in elem0 if c.isdigit())
        if (elem1!=''): toipuneita2.append(int(elem1))
        else: toipuneita2.append(0)
        elem0=uusiatapauksia[n]
        elem1 = ''.join(c for c in elem0 if c.isdigit())
        uusiatapauksia2.append(int(elem1))
        sairaalassa2.append(int(sairaalassa[n]))
        teholla2.append(int(teholla[n]))
        kuolleita2.append(int(kuolleita[n]))
        uusiakuolleita2.append(int(uusiakuolleita[n]))
    kaikkiatapauksia3=np.array(kaikkiatapauksia2).astype(int)
    toipuneita3=np.array(toipuneita2).astype(int)
    uusiatapauksia3=np.array(uusiatapauksia2).astype(int)
    sairaalassa3=np.array(sairaalassa2).astype(int)
    teholla3=np.array(teholla2).astype(int)
    kuolleita3=np.array(kuolleita2).astype(int)
    uusiakuolleita3=np.array(uusiakuolleita2).astype(int)
    ## find the first day with no reported recoveries
    paikka=len1
    toipu1=toipuneita3[0]
    toipui=toipu1
    for n in range(1,len1):
        toipu0=toipuneita3[n]
        if (toipu0==0):
            paikka=n
            toipui=toipu1
            break
        toipu1=toipu0
    ## assumption: most patients recover from the acute phase in about 3 weeks;
    ## in reality over 60% suffer from at least one long-lasting symptom
    for n in range(paikka,len1):
        toipui=toipui+uusiatapauksia3[n-21]-uusiakuolleita3[n]
        toipuneita3[n]=toipui
    napapaiva1 = np.datetime64("2020-04-01")
    timedelta1= np.timedelta64(len(kaikkiatapauksia3),'D')
    napapaiva2 = napapaiva1+timedelta1
    dada1 = pd.date_range(napapaiva1, napapaiva2, periods=len(kaikkiatapauksia3)).to_pydatetime()
    data = {'Date':dada1,
            'Kaikkia tapauksia':kaikkiatapauksia3,
            "Uusia tapauksia":uusiatapauksia3,
            "Sairaalassa":sairaalassa3,
            "Teholla":teholla3,
            "Kuolleita":kuolleita3,
            "Uusiakuolleita":uusiakuolleita3,
            "Toipuneita":toipuneita3
            }
    df2 = pd.DataFrame(data)
    return(df2)
def get_solanpaa_fi_data():
    url="https://covid19.solanpaa.fi/data/fin_cases.json"
    response = requests.get(url,allow_redirects=True)
    open('solanpaa_fi.json', 'w').write(response.text)
    with open('solanpaa_fi.json') as f:
        sola1=pd.read_json(f)
    ## also available: Rt_lower/Rt_upper (plus 50 % and 90 % bands),
    ## new_cases_uks_lower/new_cases_uks_upper (plus 50 % and 90 % bands)
    dada1=sola1["date"]
    casa1=sola1["cases"]
    death1=sola1["deaths"]
    newcasa1=sola1["new_cases"]
    newdeath1=sola1["new_deaths"]
    hosp1=sola1["hospitalized"]
    icu1=sola1["in_icu"]
    rt=sola1["Rt"]
    newcasauks=sola1["new_cases_uks"]
    data = {'Date':dada1,
            'Tapauksia':casa1,
            'Kuolemia':death1,
            'Sairaalassa':hosp1,
            'Teholla':icu1,
            'Uusia_tapauksia':newcasa1,
            'Uusia_kuolemia':newdeath1,
            'R':rt,
            'Uusia_tapauksia_ennuste':newcasauks,
            }
    df = pd.DataFrame(data)
    return(df)
def get_ecdc_fi_hospital_data():
    url="https://opendata.ecdc.europa.eu/covid19/hospitalicuadmissionrates/json/"
    response = requests.get(url,allow_redirects=True)
    open('ecdc_hoic.json', 'w').write(response.text)
    with open('ecdc_hoic.json') as f:
        sola1=pd.read_json(f)
    sola2=sola1.loc[sola1["country"]=='Finland']
    dada0=sola2["date"]
    hosp0=sola2["value"]
    len1=len(dada0)
    len2=int(len1/2)
    ## first half of the rows: hospital occupancy; second half: ICU occupancy
    dada1=dada0[1:len2-1]
    hosp1=np.array(hosp0[1:len2-1])
    icu1=np.array(hosp0[len2:len1])
    data = {'Date':dada1,
            'Sairaalassa':hosp1,
            'Teholla':icu1
            }
    df = pd.DataFrame(data)
    df.to_csv (r'ecdc_hospital_finland.csv', index = True, header=True, sep=';')
    return df
def get_thl_fi_open_data():
    ## THL open data, 1.2.2021
    url1="https://sampo.thl.fi/pivot/prod/fi/epirapo/covid19case/fact_epirapo_covid19case.json?row=measure-444833&column=dateweek20200101-508804L"
    url2="https://sampo.thl.fi/pivot/prod/fi/epirapo/covid19case/fact_epirapo_covid19case.json?row=measure-492118&column=dateweek20200101-508804L"
    headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}
    response1 = requests.get(url1,headers=headers,allow_redirects=True)
    open('thl_cases1.json', 'w').write(response1.text)
    with open('thl_cases1.json') as json_file1:
        data1 = json.load(json_file1)
    response2 = requests.get(url2,headers=headers,allow_redirects=True)
    open('thl_deaths1.json', 'w').write(response2.text)
    with open('thl_deaths1.json') as json_file2:
        data2 = json.load(json_file2)
    ## week labels and sparse value dicts from the pivot JSON
    k2=data1['dataset']
    k6=k2['dimension']['dateweek20200101']['category']['label']
    k8b=k6.values()
    d1=k2['value']
    d2=data2['dataset']['value']
    len1=len(k8b)
    dates0=list(k8b)
    casekeys=np.array(list(d1.keys())).astype(int)
    cases0=np.array(list(d1.values())).astype(int)
    deathkeys=np.array(list(d2.keys())).astype(int)
    deaths0=np.array(list(d2.values())).astype(int)
    ## scatter the sparse values into dense, zero-filled tables
    kasetab1=np.zeros(len1).astype(int)
    kasetab1[casekeys]=cases0
    deathtab1=np.zeros(len1).astype(int)
    deathtab1[deathkeys]=deaths0
    datax = {'Date':dates0,
            'Uusia_tapauksia':kasetab1,
            'Uusia_kuolemia':deathtab1
            }
    df = pd.DataFrame(datax)
    return(df)
def cut_country_data_by_current(dfx, start_date):
    mask = (dfx['Date'] >= start_date)
    dfx2 = dfx.loc[mask]
    dfx2.drop(dfx2.tail(1).index,inplace=True)  ## drop the (incomplete) last day
    return(dfx2)
#########################################################################
#########################################################################
##########################################################################
#df=load_country_cases('Finland')
#df2=cut_by_dates(df, paiva1,paiva2)
df=get_solanpaa_fi_data()
df2=cut_by_dates(df, paiva1,paiva2)
#print(df2)
#quit(-1)
dates0=df2['Date']
dailycases1=df2['Uusia_tapauksia']
cases0=dailycases1
cases=np.array(cases0).astype(int)
#dates=np.array(dates0).to_pydatetime()
dates=dates0
savgol=savgol_filter(cases, 13, 2)
casesminusavgol=cases-savgol
stdev=statistics.stdev(casesminusavgol)
mean=statistics.mean(savgol)
#print (mean)
#quit(-1)
start1 = np.datetime64(paiva1)
end1 = np.datetime64(paiva2)
start2 = np.datetime64(taika1)
end2 = np.datetime64(taika2)
#days1 = np.linspace(start1.astype('f8'), end1.astype('f8'), dtype='<M8[D]')
#napapaiva1 = np.datetime64("2020-04-01")
timedelta1= np.timedelta64(len(cases),'D')
#napapaiva2 = napapaiva1+timedelta1
#dada1 = np.linspace(napapaiva1.astype('f8'), napapaiva2.astype('f8'), dtype='<M8[D]')
days1 = pd.date_range(start1, end1, periods=len(cases)).to_pydatetime()
days2 = np.linspace(start2.astype('f8'), end2.astype('f8'), dtype='<M8[D]')
days3 = np.linspace(start1.astype('f8'), end2.astype('f8'), dtype='<M8[D]')
len1=len(days1)
len2=len(days2)
len3=len(days3)
x1 = np.linspace(0, len1, len1)
x2 = np.linspace(len1, len2, len2)
x3 = np.linspace(0, len3, len3)
lenx3=len(x3)
#print (dates)
#print (cases)
#print (len(dates))
#print (len(cases))
#quit(-1)
#print(len1)
#print(len(days1))
#print(len(cases))
#print(len(savgol1))
#print(len2)
#print(len3)
#plt.plot(x3, y_hat)
#plt.plot(x1, cases)
#plt.plot(days1, cases)
#plt.show()
#quit (-1)
#mu, sigma = 0, 0.1 # mean and standard deviation
## generate pseudo case data for different forecast scenarios
mu=mean
sigma=stdev
musavgol=savgol_filter(cases,13, 2)
s = np.random.normal(musavgol, sigma, len1)
savgol=savgol_filter(s,13, 2)
yhat = pchip_interpolate(x1, savgol, x3)
#plt.plot(x1, s)
#plt.plot(x3,yhat, color="#ffcfcf", alpha=0.25)
#plt.plot(x3,yhat, color="#ffcfcf", alpha=0.25)
#plt.plot(days3,yhat, color="#ffcfcf", alpha=0.25)
#plt.show()
#print("----")
#print(musavgol)
#print(sigma)
#print(s)
#quit(-1)
for n in range(0,50):
    s = np.random.normal(musavgol, sigma, len1)
    savgol=savgol_filter(s,13, 2)
    yhat = pchip_interpolate(x1, savgol, x3)
    yhat=yhat*y_coeff
    plt.plot(days3,yhat, color="#ffcfff", alpha=0.25)
#plt.show()
#quit(-1)
mu=mean
sigma=stdev/3
musavgol=savgol_filter(cases,13, 2)
for n in range(0,150):
    s = np.random.normal(musavgol, sigma, len1)
    savgol=savgol_filter(s,13, 2)
    yhat = pchip_interpolate(x1, savgol, x3)
    yhat=yhat*y_coeff
    plt.plot(days3,yhat, color="#ffcfff", alpha=0.25)
mu=mean
sigma=stdev/9
musavgol=savgol_filter(cases,13, 2)
for n in range(0,150):
    s = np.random.normal(musavgol, sigma, len1)
    savgol=savgol_filter(s,13, 2)
    yhat = pchip_interpolate(x1, savgol, x3)
    yhat=yhat*y_coeff
    plt.plot(days3,yhat, color="#ffcfff", alpha=0.25)
dateformat1 = mdates.DateFormatter('%d.%m')
plt.xticks(fontsize=16)
plt.yticks(fontsize=16, rotation=0)
ax1=plt.gca()
plt.title("Koronavirustapauksia - eksponentti-ennuste", fontsize=18)
ax1.set_xlabel('Pvm', color='g',size=18)
ax1.set_ylabel('Koronatapauksia /pv', color='#800000',size=18)
ax1.xaxis.set_major_formatter(dateformat1)
plt.ylim(0, ymax1)
y=np.array(cases).astype(float)
#yhat1 = savgol_filter(y, 51, 3)
#y=yhat1
## be sure there are no zeros
yeka=y.copy()  ## keep the original values for the scatter plot
y[y==0] = 1
y=np.log(y)
#x2 = np.linspace(1, 12, 50)
#ci = 0.95
ci=0.75
pp = (1. + ci) / 2.
nstd = stats.norm.ppf(pp)
#print (nstd)
#print (x)
#print (y)
popt, pcov = curve_fit(func, x1, y)
perr = np.sqrt(np.diag(pcov))
popt_up = popt + nstd * perr
popt_dw = popt - nstd * perr
y_01=np.exp(func(x1, *popt))
y_02=np.exp(func(x1, *popt_up))
y_03=np.exp(func(x1, *popt_dw))
y_1=np.exp(func(x3, *popt))
y_2=np.exp(func(x3, *popt_up))
y_3=np.exp(func(x3, *popt_dw))
dy_1=y_2-y_1 ## upper
dy_2=y_1-y_3 ## lower
ysavgol=savgol_filter(y, 13, 2)
#ysavgol=y
ycasesminusavgol=y-ysavgol
ystdev=statistics.stdev(ycasesminusavgol)
mean=statistics.mean(ysavgol)
#print (ystdev)
y_1=y_1*y_coeff
ssdev1=y_1*ystdev
yy_1=y_1+ssdev1
ya_1=y_1-ssdev1
yy_2=y_1+(ssdev1*2)
ya_2=y_1-(ssdev1*2)
yy_3=y_1+(ssdev1/2)
ya_3=y_1-(ssdev1/2)
yy_4=y_1+(ssdev1/4)
ya_4=y_1-(ssdev1/4)
plt.fill_between(days3, yy_2, ya_2, fc='#ffafaf', alpha=0.5)
plt.fill_between(days3, yy_1, ya_1, fc='#ff9f9f', alpha=0.5)
plt.fill_between(days3, yy_3, ya_3, fc='#ff9f9f', alpha=0.5)
plt.fill_between(days3, yy_4, ya_4, fc='#ff7f7f', alpha=0.5)
plt.plot(days3, y_1, c='#00007f', lw=3.)
#plt.plot(days3, y)
plt.scatter(days1, yeka, c='k')
plt.show()
print("Done.")
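The shaded bands the script draws around the forecast come from the parameter covariance returned by `curve_fit`: the fitted slope and intercept are shifted by ±z·stderr at the chosen confidence level, and the band edges are the exponentials of the shifted lines. A self-contained sketch of that construction, using synthetic noisy data (the 5 % growth rate and noise level are invented) rather than the real case counts:

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

def linear(x, a, b):
    return a * x + b

# synthetic exponential growth with multiplicative noise, in log space
rng = np.random.default_rng(1)
x = np.arange(40, dtype=float)
y = np.log(50.0 * np.exp(0.05 * x)) + rng.normal(0.0, 0.05, x.size)

popt, pcov = curve_fit(linear, x, y)
perr = np.sqrt(np.diag(pcov))        # standard errors of slope and intercept

ci = 0.75                            # same 75 % level the script uses
nstd = stats.norm.ppf((1.0 + ci) / 2.0)
popt_up = popt + nstd * perr
popt_dw = popt - nstd * perr

# band edges: exponentials of the shifted log-linear fits
x_future = np.arange(0, 60, dtype=float)
mid = np.exp(linear(x_future, *popt))
upper = np.exp(linear(x_future, *popt_up))
lower = np.exp(linear(x_future, *popt_dw))
```

Note that this is the parameter-uncertainty band of the fit, not a prediction interval; the script additionally widens it with the residual standard deviation before plotting.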
Licensing
I, the copyright holder of this work, hereby publish it under the following license:
This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.
- You are free:
- to share – to copy, distribute and transmit the work
- to remix – to adapt the work
- Under the following conditions:
- attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or compatible license as the original.
File history
Click on a date/time to view the file as it appeared at that time.
Date/Time | Dimensions | User | Comment
---|---|---|---
06:47, 7 August 2022 (current) | 1,178 × 599 (396 KB) | Merikanto (talk, contribs) | Update
18:41, 13 April 2022 | 947 × 548 (552 KB) | Merikanto (talk, contribs) | update
10:17, 8 December 2021 | 1,006 × 432 (534 KB) | Merikanto (talk, contribs) | update
13:01, 5 August 2021 | 889 × 455 (421 KB) | Merikanto (talk, contribs) | Update
05:58, 19 July 2021 | 873 × 489 (510 KB) | Merikanto (talk, contribs) | Update
12:15, 13 July 2021 | 707 × 484 (546 KB) | Merikanto (talk, contribs) | Update
07:09, 29 June 2021 | 686 × 383 (461 KB) | Merikanto (talk, contribs) | update
11:03, 5 June 2021 | 979 × 508 (405 KB) | Merikanto (talk, contribs) | Upload
10:45, 5 June 2021 | 878 × 419 (416 KB) | Merikanto (talk, contribs) | Upload
12:31, 13 May 2021 | 804 × 374 (484 KB) | Merikanto (talk, contribs) | Update
File usage on Commons
There are no pages that use this file.
Metadata
This file contains additional information such as Exif metadata which may have been added by the digital camera, scanner, or software program used to create or digitize it. If the file has been modified from its original state, some details such as the timestamp may not fully reflect those of the original file. The timestamp is only as accurate as the clock in the camera, and it may be completely wrong.
Width | 942.48pt
---|---
Height | 479.52pt