All posts by David

Scientific literature: Manage & Read Everywhere

Multiple tools exist for managing and reading scientific publications; with open source and open standards in mind, only Zotero, JabRef, and a few others remain.

Managing and reading everywhere means syncing across PC, tablet, and phone, but for an easy solution with Zotero you need a paid subscription once you exceed the free 300 MB of online storage, which I do.

Continue reading Scientific literature: Manage & Read Everywhere

MapReduce for Education

MapReduce represents a pattern that had a huge impact on the data analysis and big data community. Apache Hadoop allows data processing to be scattered across, and scaled with, the number of nodes and cores.

One of the many cornerstones of this framework is that code is shipped to and executed on the node where the data resides. Only a pre-processed, transformed version of the data (the map output) is then shuffled and sorted over the network to the aggregators (reducers) running on different executors.

MapReduce is hard to use on its own, so it is usually deployed via Apache Hadoop or Apache Spark. To play around with the pattern without either of those large frameworks, I created a small one in Python: MapReduceSlim. It emulates the core features of MapReduce with one difference: it feeds each line of the input files separately into the map function, whereas Apache Hadoop works block-wise. This makes it a nice way to understand the behavior of the MapReduce pattern and how to implement a mapper and a reducer (a simplified sketch of such a driver loop follows the WordCount example below).

Classic WordCount Example

Mapper function

# Hint: in MapReduce with Hadoop Streaming the 
# input comes from standard input STDIN
def wc_mapper(key: str, values: str):
    # remove leading and trailing whitespaces
    line = values.strip()
    # split the line into words
    words = line.split()
    for word in words:
        # emit the intermediate (word, 1) pair; with
        # Hadoop Streaming this would be written to STDOUT
        yield word, 1
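
The STDIN/STDOUT hint in the mapper refers to how Hadoop Streaming drives such scripts. As a side note, a minimal wrapper in that style could look like the sketch below; the wrapper itself is hypothetical and not part of MapReduceSlim, and the tab-separated output follows the Hadoop Streaming convention.

import sys
from map_reduce_slim import wc_mapper

# Hypothetical Hadoop-Streaming-style wrapper:
# read lines from STDIN, emit tab-separated
# key/value pairs on STDOUT.
if __name__ == '__main__':
    for line in sys.stdin:
        # the key is unused by wc_mapper
        for word, count in wc_mapper(None, line):
            print(f'{word}\t{count}')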

Reducer function

def wc_reducer(key: str, values: list):
    # sum up all counts that were shuffled to this key
    current_count = 0
    word = key
    for value in values:
        current_count += value
    # emit the final (word, total count) pair
    yield word, current_count
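
Outside of any framework, the reducer can be sanity-checked directly with an already grouped list of values, for example:

# values as they would arrive after the shuffle and sort phase
print(list(wc_reducer('data', [1, 1, 1])))
# prints [('data', 3)]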

Finally, run the job with the MapReduceSlim framework

# Import the slim framework
from map_reduce_slim import MapReduceSlim, wc_mapper, wc_reducer

### One input file version
# Read the content from one file and use the 
# content as input for the run.
MapReduceSlim('davinci.txt', 'davinci_wc_result_one_file.txt', wc_mapper, wc_reducer)

### Directory input version
# Read all files in the given directory and 
# use the content as input for the run.
MapReduceSlim('davinci_split', 'davinci_wc_result_multiple_file.txt', wc_mapper, wc_reducer)
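
For intuition, here is a heavily simplified sketch of what such a slim driver could do under the hood. It is not the actual MapReduceSlim implementation (which works on files and directories, as shown above), just an illustration of the line-wise map, shuffle/sort, and reduce steps:

from collections import defaultdict

def map_reduce_sketch(lines, mapper, reducer):
    intermediate = defaultdict(list)
    # Map: feed each input line separately into the mapper
    for line_no, line in enumerate(lines):
        for key, value in mapper(str(line_no), line):
            # Shuffle: group all emitted values by their key
            intermediate[key].append(value)
    # Sort by key, then Reduce each group
    for key in sorted(intermediate):
        yield from reducer(key, intermediate[key])

# Word count over two in-memory lines
result = dict(map_reduce_sketch(['to be or not to be', 'be'],
                                wc_mapper, wc_reducer))
print(result)  # {'be': 3, 'not': 1, 'or': 1, 'to': 2}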

Further information on GitHub: https://github.com/2er0/MapReduceSlim


R (rlang) and Plots everywhere

Data science in Jupyter notebooks can sometimes get exhausting: what about debugging, version control, code review, and so on? Coming from a software engineering background, it's like losing 50% of the tooling you were used to.

To mitigate those problems, I recently switched partially from Python to R, with many improvements. For local Python coding, JetBrains PyCharm is my tool of choice, and Jupyter notebooks for remote coding. With R it is RStudio Desktop locally, and for remote work there is RStudio Server, which is almost like the desktop version running in a browser. This allows me to develop and analyze data from any device with a browser.

RStudio Server running an R Notebook sample

Continue reading R (rlang) and Plots everywhere

Base Installation of Arch Linux + Good to Know

Install

If you want to install Arch, everyone tells you to read the installation guide. The second thing you may hear is to read the installation guide again and to follow it step by step. There is a short name for that: RTFM (Read The Fucking Manual) and stick to it. No joke.

Make backups before installing Arch Linux. 😉

Continue reading Base Installation of Arch Linux + Good to Know

Why Arch Linux?

My History

For the last four years, I have used Manjaro as my main GNU/Linux distribution for daily use. That includes developing in Java/C++/Python and data analysis with R/Python.

Now it was time for me to switch from Manjaro to another distribution. Side note: Manjaro uses Arch Linux as its base distribution but provides a considerable amount of additional services out of the box. Manjaro ran fine for those four years with only one incident, involving the integrated WWAN modem.

Since I started using Manjaro, I have loved the "rolling release" model with an up-to-date kernel and up-to-date packages. I have now decided that it is time for me to switch to plain Arch Linux.

Continue reading Why Arch Linux?