I improved the fruit repository.
I added new entries to my sources.list.d directory.
I added a file to configure mpv.
I now save the Parquet files with more expressive names.
The write_dataset solution works better for very large files: I write multiple Parquet files and have no RAM issues at all.
I added new code showing how to handle a big CSV file, inferring its schema automatically and filtering the results.
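A minimal sketch of that pattern with the arrow package, assuming a hypothetical column name `amount`; the toy CSV below stands in for the real big file:

```r
library(arrow)
library(dplyr)

# Toy stand-in for the real big CSV; the column name "amount" is hypothetical.
write.csv(data.frame(amount = c(500, 2000, 3000)), "big.csv", row.names = FALSE)

# open_dataset() scans the file lazily and infers the schema automatically.
ds <- open_dataset("big.csv", format = "csv")

# The filter is applied during the scan, so the full file is never held in RAM;
# collect() materialises only the filtered rows as a data frame.
result <- ds |>
  filter(amount > 1000) |>
  collect()
```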
I now ensure that final_spain.csv is removed before I regenerate it.
I added another file to carry out some diagnostics on the TAM.
I fixed a mistake in the saving of a compressed CSV file.
I modified the obfuscated dataset.
I save the data directly as a compressed CSV file.
I changed the way I handle the date.
I changed the file names and now use write_csv_arrow to save the data as CSV.
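A sketch of writing a gzip-compressed CSV with write_csv_arrow by wrapping the sink in a CompressedOutputStream; the data and file name are illustrative:

```r
library(arrow)

df <- data.frame(x = 1:3, y = c("a", "b", "c"))

# Passing a CompressedOutputStream as the sink yields a gzip-compressed CSV.
write_csv_arrow(df, CompressedOutputStream$create("data.csv.gz", compression = "gzip"))

# read_csv_arrow() handles the .gz file transparently.
back <- read_csv_arrow("data.csv.gz")
```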
I can now choose to skip the first part of the data processing once it has already been done for all the files.
I now use write_parquet instead of write_dataset to save the results, since I do not need a multi-file solution.
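For results that fit comfortably in memory the single-file route looks like this (data and file name are illustrative):

```r
library(arrow)

df <- data.frame(id = 1:3, value = c(2.5, 3.5, 4.5))

# write_parquet() produces one Parquet file; write_dataset() would instead
# create a directory of files, which only pays off for very large data.
write_parquet(df, "results.parquet")

back <- read_parquet("results.parquet")
```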
I changed the name of the input file.
I removed two columns I no longer need.
I modified the code to reflect changes in the Slovenian input files.
I cleaned up the code and now also generate a Parquet file.
A better way to handle the data: I now process each page file retrieved from the Spanish state aid site individually.
New code to process the Spanish data obtained via the API.
Minor modifications.
A simple script to query the TAM.
I fixed some typos.
I also create the railway dataset.
Code to retrieve the Spanish state aid data via their API.
I replaced the magrittr pipe with the native pipe.
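The replacement is mechanical: magrittr's `%>%` becomes the base-R `|>` (available since R 4.1), provided the right-hand side is a function call:

```r
# magrittr style (requires a package that exports %>%):
#   result <- 1:5 %>% sapply(function(i) i^2) %>% sum()

# native pipe, no package needed:
result <- 1:5 |> sapply(function(i) i^2) |> sum()
# result is 55
```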
I fixed a function that calculates the lags.
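The fixed function itself is not shown in the log; a minimal base-R sketch of a lag helper (name and signature hypothetical) illustrates the idea:

```r
# Shift a vector forward by n positions, padding the front with NA.
lag_vec <- function(x, n = 1) {
  stopifnot(n >= 1, n <= length(x))
  c(rep(NA, n), head(x, -n))
}

lag_vec(c(10, 20, 30), n = 1)  # NA 10 20
```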
I added new indicators.
I now also save time-stamped versions of the portfolio compositions as PDF files.