Saizeriya

Little did I know, Saizeriya is a Japanese chain of Italian family-style restaurants with a few branches in Singapore. It is a wallet-friendly restaurant serving a fusion of Asian and Western food.

It has been a while since my last visit. They had something new this time: the chili crab spaghetti. It tasted only so-so. You would be better off ordering a real chili crab and pairing the gravy with spaghetti for the best satisfaction.

Another spaghetti I always order whenever I visit their branch at Aperia Mall is the vongole in spicy tomato soup, in other words, the spicy tomato clam spaghetti. It is a tasty, spicy-sweet, soupy kind of spaghetti.

Other than the spaghetti, I think the rest of the food is not quite worth it, as the portions are quite small. If you wish, you can order the set, which comes with a bowl of salad and a drink that you pour yourself at the self-service drink section. Otherwise, plain water is free flow.

Easy Noodle Bar

It is located at Foch Road, Singapore, close to Jalan Besar, an old district in Singapore. It is now accessible via Bendemeer MRT (Downtown Line). The road features old, small shophouses, and Easy Noodle Bar is a Japanese restaurant located between the famous pig organ soup stall and the Vietnamese restaurant, Lang Nuong Vietnam.

I found this restaurant through an introduction by a fellow Instagram friend, and I brought two colleagues along to try it for one of our dinners together.

Their menu is simple and easy, as the restaurant's name suggests. The selection of food is not large, but it is good enough to warrant a few tries.

They took pictures of the food and put them into a photo album displayed on the table. They are not all the best shots, but they are good enough to give you some idea of the food; otherwise, you can ask the waitress to explain the dishes before you order.

During my first visit, I tried plenty of good ones.

Nikujaga
My favorite of all the dishes we ordered: a Japanese beef stew with vegetables, and the stock was sweet and tasty. According to Sethlui.com, they use grain-fed Angus beef shoulder marinated with mirin and sake. The beef was tender and melted easily in my mouth.

Another recommended dish is the Yang Chun noodle: a simple noodle dish with a good soup base, I think, or else my colleague would not have tried to finish the soup. It gives a homely feel while eating, and it is meatless except for a soft-boiled egg.

We tried the Tori Nanban, a fried chicken cutlet. Crisp on the outside, juicy on the inside, and dipped in their sauce: a perfect match.

The prawn tempura was great too; it was served hot and the breadcrumb coating did not look oily. Fresh prawns were used, and the meat had a crunchy texture.

Lastly, the Chicken Roulade: minced meat rolled inside a chicken breast, I think. Everything was meaty. It comes in two portions, nicely grilled, still tender and delicious.

I recommend trying their lunch sets too, because the prices are quite reasonable and the portions are just right, which allowed us to try many different dishes.

The ambiance at night is a little dark and under-lit; however, it is still comfortable enough to enjoy dinner together.

Address: 20 Foch Rd, Singapore 209261

Tonkotsu Kazan Ramen

It is famous for its tagline, “Volcano Ramen”.

I think this is one of the best ramen restaurants in Singapore; at least, I have seen a few food bloggers writing about it and sharing their experience of the “volcano” ramen.

I first tried it at the JEM branch in Jurong East, Singapore, with my colleagues during lunchtime. The lunch crowd during my visit was normal, there was no waiting time, and the waitress was ready to serve us.

Their menu has quite an extensive selection of ramen, with the usual Shio and Shoyu on the list, plus other specialty ramen if you want to try something different, especially the advertised “volcano” ramen.

For my first visit, I went for the original recipe: the Shoyu-based soup with ramen, char siew, pickles, half an onsen egg and beansprouts.

I do have a question about the ramen sold in restaurants in Singapore: how come, whenever I order a chashu ramen, it comes with only one piece of meat? It is rare to get more than one unless you order extra meat.

I wanted to say it is a basic ramen that you can find anywhere; the noodle itself is really basic, neither thick nor thin, and it definitely does not look handmade. I cannot really remember how springy the noodles were.

The pork broth is thick, rich and slightly salty to my taste, but I think it is just right for most people. With just one piece of meat, I did not feel full and had to almost finish the soup to fill my stomach. That is not a good idea at all.

For my colleagues who ordered the Ishiyaki Ramen, or Hot Stone Roast Ramen, which is the “volcano” ramen, there is a set of instructions to follow before starting to eat. While waiting for their noodles to be ready, the table was served free-flow hard-boiled eggs, similar to another ramen restaurant I visited before. I took one myself.

This is the scene of preparing the “volcano” ramen, where free-flow broth is served too. The waitress poured the broth into the bowl of ramen before covering it with a cone-shaped lid. While the stone stays hot, it “cooks” the ramen.

There is a timer to tell when it is cooked and best served, and the bowl of noodles is enough for two persons unless you are a very big eater!

Address: JEM, 50 Jurong Gateway Rd, #03-09, Singapore 608549.

Black Ink

It has been a while since my last food review. I think this blog needs some balance between the food and travel posts and my learning journey in Python. Of course, I do not want to make my readers bored with programming languages and miss out on the good food in Singapore!

A throwback to an event in September 2018, when I received a voucher from On Coffee Makers, which has been promoting coffee culture in Singapore through its Facebook Page.

I made a random choice from the list given and picked Black Ink because it is on the East side. It is more convenient for me to travel nearby than far for a cup of coffee. And this was a free coffee!

It took me a while to walk from the Bedok MRT station to the cafe. If you know where this cafe is located, then you will not ask why I chose to walk.

It is located on Swan Lake Ave, inside the Opera Estate, two bus stops away from Bedok Interchange. I think there is no bus stop inside the estate itself. Most people here live in terrace houses.

I was served by a girl at the counter when I showed her the coupon via Facebook Messenger. She was not aware of the coupon and asked me to wait for her supervisor.

The owner of the cafe came, and he could not assist further, as the coupon was not yet accepted at his cafe even though its stated start date had passed. Still, he did not hesitate to honor the voucher for me and asked what I would like to have.

The coupon was meant for coffee only, and I chose my favourite latte. If you want to switch to soy milk, an additional cost applies. And yes, I thought soy milk was cheaper than milk.

I did not want to look like a cheapskate, so I decided to order the butter croissant from the menu. They have adopted the eatsy mobile application, which lets customers order food from their phones and pay cashlessly, and they were running a promotion or cash rebate at that time. So, why not!

Here you go: my latte and croissant were served shortly after. The croissant was ordinary but crispy, with flakes all over the plate, and the latte tasted just right, like at any other specialty cafe.

However, the atmosphere there is good: it is a quiet neighbourhood, and the cafe was not crowded even though I went during the weekend. It is a good place for studying or a small gathering. I think quite a number of neighbours visit this cafe on weekends, as the friendly owner chats with them like they meet up there often.

I did not try the other food from this cafe, but reviews on the Internet look good. I will probably consider trying it again next time I want to organize a little study group.

Address: 29 Swan Lake Ave, Singapore 455718.

Using Python for Streaming Data with Iterators

Using pandas read_csv iterator for streaming data

Next, the tutorial uses the pandas read_csv() function with the chunksize argument to read the same dataset, the World Bank World Development Indicators, chunk by chunk.

Another way to read data that is too large to store in memory is to read the file in chunks, as DataFrames of a certain length (set by chunksize).

First and foremost, import the pandas package, then use the .read_csv() function, which creates an iterable reader object. You can then call next() on it to print a chunk. Refer to the sample code below.

# Import the pandas package
import pandas as pd

# Initialize reader object: df_reader
df_reader = pd.read_csv('ind_pop.csv', chunksize=10)

# Print two chunks
print(next(df_reader))
print(next(df_reader))

The output of the above code shows the data processed in chunks of 10 rows.

The next 10 records will be from index 10 to 19.
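
As a quick check (my own sketch, assuming the same ind_pop.csv file as above), you can confirm that each chunk continues the row index where the previous one stopped:

# My own sketch: each chunk's index continues where the previous chunk stopped
import pandas as pd

reader = pd.read_csv('ind_pop.csv', chunksize=10)
print(next(reader).index)  # RangeIndex(start=0, stop=10, step=1)
print(next(reader).index)  # RangeIndex(start=10, stop=20, step=1)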

Next, the tutorial requires creating another DataFrame composed of only the rows for a specific country. Then, zip together two of the columns from the new DataFrame, ‘Total Population’ and ‘Urban population (% of total)’. Finally, create a list of tuples from the zip object, where each tuple is composed of a value from each of the two columns.

Sounds a bit complicated now… Let’s see the sample code below:

# Initialize reader object: urb_pop_reader
urb_pop_reader = pd.read_csv('ind_pop_data.csv', chunksize=1000)

# Get the first DataFrame chunk: df_urb_pop
df_urb_pop = next(urb_pop_reader)

# Check out the head of the DataFrame
print(df_urb_pop.head())

# Check out specific country: df_pop_ceb
df_pop_ceb = df_urb_pop.loc[df_urb_pop['CountryCode'] == 'CEB']

# Zip DataFrame columns of interest: pops
pops = zip(df_pop_ceb['Total Population'], df_pop_ceb['Urban population (% of total)'])

# Turn zip object into list: pops_list
pops_list = list(pops)

# Print pops_list
print(pops_list)

My output looks as below:

Now, it requires plotting a scatter plot. The source code below reuses the previous exercise’s code from DataCamp itself, so a different method is used in this line:

# Check out specific country: df_pop_ceb
df_pop_ceb = df_urb_pop.loc[df_urb_pop['CountryCode'] == 'CEB']

#From DataCamp
df_pop_ceb = df_urb_pop[df_urb_pop['CountryCode'] == 'CEB']

It requires using a list comprehension to create a new DataFrame column, ‘Total Urban Population’, whose values are the product of the first and second elements in each tuple.

Furthermore, because the 2nd element is a percentage, the result needs to be divided by 100 or, alternatively, multiplied by 0.01. For example, a tuple of (1000000, 40.5) becomes int(1000000 * 40.5 * 0.01) = 405000 urban residents.

Then, using the Matplotlib package, plot a scatter plot of the new column ‘Total Urban Population’ against ‘Year’. It is quite a lot of stuff combined up to this point. See the code and the resulting plot below.

# Code from previous exercise
import matplotlib.pyplot as plt  # needed for plt.show() below
urb_pop_reader = pd.read_csv('ind_pop_data.csv', chunksize=1000)
df_urb_pop = next(urb_pop_reader)
df_pop_ceb = df_urb_pop[df_urb_pop['CountryCode'] == 'CEB']
pops = zip(df_pop_ceb['Total Population'], 
           df_pop_ceb['Urban population (% of total)'])
pops_list = list(pops)

# Use list comprehension to create new DataFrame column 'Total Urban Population'
df_pop_ceb['Total Urban Population'] = [int(entry[0] * entry[1] * 0.01) for entry in pops_list]

# Plot urban population data
df_pop_ceb.plot(kind='scatter', x='Year', y='Total Urban Population')
plt.show()

I realized that ‘Year’ is not shown in integer format. I printed out the values of the ‘Year’ column and they looked perfectly fine. Do you know what is wrong with my code above?
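
If the decimals come from matplotlib’s tick locator rather than from the data itself, which is my guess and not something the tutorial confirms, forcing integer ticks on the x-axis should clean it up:

# My guess at a fix (assumption: the decimals are matplotlib tick positions,
# not the actual 'Year' values): force integer ticks on the x-axis
from matplotlib.ticker import MaxNLocator

ax = df_pop_ceb.plot(kind='scatter', x='Year', y='Total Urban Population')
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
plt.show()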

The next instruction from DataCamp reads: “This time, you will aggregate the results over all the DataFrame chunks in the dataset. This basically means you will be processing the entire dataset now. This is neat because you’re going to be able to process the entire large dataset by just working on smaller pieces of it!”

The sample code below includes some of DataCamp’s code, so some variable names have been changed to match theirs. Here, it requires appending each DataFrame chunk to a variable called ‘data’ and then plotting the scatter plot.

# Initialize reader object: urb_pop_reader
urb_pop_reader = pd.read_csv('ind_pop_data.csv', chunksize=1000)

# Initialize empty DataFrame: data
data = pd.DataFrame()

# Iterate over each DataFrame chunk
for df_urb_pop in urb_pop_reader:

    # Check out specific country: df_pop_ceb
    df_pop_ceb = df_urb_pop[df_urb_pop['CountryCode'] == 'CEB']

    # Zip DataFrame columns of interest: pops
    pops = zip(df_pop_ceb['Total Population'],
                df_pop_ceb['Urban population (% of total)'])

    # Turn zip object into list: pops_list
    pops_list = list(pops)

    # Use list comprehension to create new DataFrame column 'Total Urban Population'
    df_pop_ceb['Total Urban Population'] = [int(tup[0] * tup[1] * 0.01) for tup in pops_list]
    
    # Append DataFrame chunk to data: data
    data = data.append(df_pop_ceb)
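    # Note (my addition): DataFrame.append was removed in pandas 2.0;
    # on newer versions, use data = pd.concat([data, df_pop_ceb]) instead.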

# Plot urban population data
data.plot(kind='scatter', x='Year', y='Total Urban Population')
plt.show()

I tried to compare the lines of code used by DataCamp against mine.

# Use list comprehension to create new DataFrame column 'Total Urban Population'
df_pop_ceb['Total Urban Population'] = [int(entry[0] * entry[1] * 0.01) for entry in pops_list]

# Plot urban population data
df_pop_ceb.plot(kind='scatter', x='Year', y='Total Urban Population')
plt.show()

# Use list comprehension to create new DataFrame column 'Total Urban Population'
    df_pop_ceb['Total Urban Population'] = [int(tup[0] * tup[1] * 0.01) for tup in pops_list]
    
    # Append DataFrame chunk to data: data
    data = data.append(df_pop_ceb)

# Plot urban population data
data.plot(kind='scatter', x='Year', y='Total Urban Population')
plt.show()

Both df_pop_ceb lines look the same, so how come the scatter plots show up differently? I still cannot figure out why my scatter plot shows ‘Year’ with decimal points.

Lastly, wrap up the tutorial by creating a user-defined function that takes two parameters, the filename and the country code, and does all of the above. The source code and another scatter plot are shown below:

# Define plot_pop()
def plot_pop(filename, country_code):

    # Initialize reader object: urb_pop_reader
    urb_pop_reader = pd.read_csv(filename, chunksize=1000)

    # Initialize empty DataFrame: data
    data = pd.DataFrame()
    
    # Iterate over each DataFrame chunk
    for df_urb_pop in urb_pop_reader:
        # Check out specific country: df_pop_ceb
        df_pop_ceb = df_urb_pop[df_urb_pop['CountryCode'] == country_code]

        # Zip DataFrame columns of interest: pops
        pops = zip(df_pop_ceb['Total Population'],
                    df_pop_ceb['Urban population (% of total)'])

        # Turn zip object into list: pops_list
        pops_list = list(pops)

        # Use list comprehension to create new DataFrame column 'Total Urban Population'
        df_pop_ceb['Total Urban Population'] = [int(tup[0] * tup[1] * 0.01) for tup in pops_list]
    
        # Append DataFrame chunk to data: data
        data = data.append(df_pop_ceb)

    # Plot urban population data
    data.plot(kind='scatter', x='Year', y='Total Urban Population')
    plt.show()

# Set the filename: fn
fn = 'ind_pop_data.csv'

# Call plot_pop for country code 'CEB'
plot_pop(fn, 'CEB')

# Call plot_pop for country code 'ARB'
plot_pop(fn, 'ARB')

The scatter plot of the country code ‘ARB’ is shown below.

Summary of the day:

  • Use of user defined functions with parameters.
  • Iterators and list comprehensions.
  • Pandas’ DataFrame.
  • Matplotlib scatter plot.

Using Python for Streaming Data with Generators

Continuing with the same dataset from the World Bank World Development Indicators. Previously, I wrote about using iterators to load data chunk by chunk; this post is about using generators to load a file line by line.

Generators work well for streaming data, where the data is written line by line over time. A generator can read and process the data until it reaches the end of the file and there are no more lines to process. Sometimes, data sources can be so large that storing the entire dataset in memory becomes too resource-intensive.
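
As a minimal illustration (my own sketch, not from the tutorial), a generator function yields values one at a time instead of building the whole sequence in memory:

# My own sketch: a generator yields one value at a time, on demand
def countdown(n):
    """Yield n, n-1, ..., 1, one value at a time."""
    while n > 0:
        yield n
        n -= 1

for value in countdown(3):
    print(value)  # prints 3, then 2, then 1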

In this exercise from DataCamp’s tutorials, I process the first 1000 rows of a file line by line to create a dictionary of counts of how many times each country appears in a column of the dataset. Below are some details about the dataset and how to import it.

To begin, I need to open a connection to a file using what is known as a context manager. For example, the command,

with open('datacamp.csv') as datacamp

binds the csv file ‘datacamp.csv’ as datacamp in the context manager.

Here, the with statement is the context manager, and its purpose is to ensure that resources are efficiently allocated when opening a connection to a file.
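
A small sketch of what this buys you (my own example, assuming a local datacamp.csv exists): the file is closed automatically when the block exits, even if an error occurs inside it.

# My own example: the file closes automatically when the with-block exits
with open('datacamp.csv') as datacamp:
    header = datacamp.readline()

print(datacamp.closed)  # True -- no explicit close() needed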

The sample code below uses the .readline() method to read a line from the file object. The line can then be split into a list using the .split() method.

# Open a connection to the file
with open('world_dev_ind.csv') as file:

    # Skip the column names
    file.readline()

    # Initialize an empty dictionary: counts_dict
    counts_dict = {}

    # Process only the first 1000 rows
    for j in range(1000):

        # Split the current line into a list: line
        line = file.readline().split(',')

        # Get the value for the first column: first_col
        first_col = line[0]

        # If the column value is in the dict, increment its value
        if first_col in counts_dict.keys():
            counts_dict[first_col] += 1

        # Else, add to the dict and set value to 1
        else:
            counts_dict[first_col] = 1

# Print the resulting dictionary
print(counts_dict)

The output looks as below:

{'Arab World': 80, 'Caribbean small states': 77, 'Central Europe and the Baltics': 71, 'East Asia & Pacific (all income levels)': 122, 'East Asia & Pacific (developing only)': 123, 'Euro area': 119, 'Europe & Central Asia (all income levels)': 109, 'Europe & Central Asia (developing only)': 89, 'European Union': 116, 'Fragile and conflict affected situations': 76, 'Heavily indebted poor countries (HIPC)': 18}
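
As an aside (my own note, not part of the exercise), the if/else counting pattern can be collapsed into one line with dict.get(), which returns a default value when the key is missing:

# My own aside: dict.get() replaces the if/else counting pattern
counts_dict[first_col] = counts_dict.get(first_col, 0) + 1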

Use generator to load data
Generators allow users to lazily evaluate data. This concept of lazy evaluation is useful when you have to deal with very large datasets because it lets you generate values in an efficient manner by yielding only chunks of data at a time instead of the whole thing at once.
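
To see lazy evaluation in action (my own sketch, not from the tutorial), compare a list comprehension with a generator expression: the list builds every value up front, while the generator computes values only when asked.

# My own sketch: the list comprehension materializes all values immediately,
# while the generator expression computes them on demand
squares_list = [n * n for n in range(10**6)]  # all values in memory now
squares_gen = (n * n for n in range(10**6))   # nothing computed yet
print(next(squares_gen))  # 0 -- the first value, computed on demand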

The tutorial requires defining a generator function read_large_file() that produces a generator object which yields a single line from the file each time next() is called on it.

# Define read_large_file()
def read_large_file(file_object):
    """A generator function to read a large file lazily."""

    # Loop indefinitely until the end of the file
    while True:

        # Read a line from the file: data
        data = file_object.readline()

        # Break if this is the end of the file
        if not data:
            break

        # Yield the line of data
        yield data
        
# Open a connection to the file
with open('world_dev_ind.csv') as file:

    # Create a generator object for the file: gen_file
    gen_file = read_large_file(file)

    # Print the first three lines of the file
    print(next(gen_file))
    print(next(gen_file))
    print(next(gen_file))

Next, use the generator function created above to read the file line by line, build a dictionary of counts of how many times each country appears in a column, process all the rows in the file, and print out the result. See the sample code below:

# Initialize an empty dictionary: counts_dict
counts_dict = {}

# Open a connection to the file
with open('world_dev_ind.csv') as file:

    # Iterate over the generator from read_large_file()
    for line in read_large_file(file):

        row = line.split(',')
        first_col = row[0]

        if first_col in counts_dict.keys():
            counts_dict[first_col] += 1
        else:
            counts_dict[first_col] = 1

# Print            
print(counts_dict)

And, the output looks as below:

{'CountryName': 1, 'Arab World': 80, 'Caribbean small states': 77, 'Central Europe and the Baltics': 71, 'East Asia & Pacific (all income levels)': 122, 'East Asia & Pacific (developing only)': 123, 'Euro area': 119, 'Europe & Central Asia (all income levels)': 109, 'Europe & Central Asia (developing only)': 89, 'European Union': 116, 'Fragile and conflict affected situations': 76, 'Heavily indebted poor countries (HIPC)': 99, 'High income': 131, 'High income: nonOECD': 68, 'High income: OECD': 127, 'Latin America & Caribbean (all income levels)': 130, 'Latin America & Caribbean (developing only)': 133, 'Least developed countries: UN classification': 78, 'Low & middle income': 138, 'Low income': 80, 'Lower middle income': 126, 'Middle East & North Africa (all income levels)': 89, 'Middle East & North Africa (developing only)': 94, 'Middle income': 138, 'North America': 123, 'OECD members': 130, 'Other small states': 63, 'Pacific island small states': 66, 'Small states': 69, 'South Asia': 36}

Obviously, it shows more data than before because it is not limited to the first 1000 rows. (The extra entry 'CountryName': 1 is the header row, which this version does not skip.)

Summary of the day:

  • Generators for streaming data.
  • Context manager, open a file connection.
  • Use generator function to load data.
  • Use a dictionary to store result.

World Bank World Development Indicator Case Study with Python

In DataCamp’s tutorial, Python Data Science Toolbox (Part 2), it combines user-defined functions, iterators, list comprehensions and generators to wrangle and extract meaningful information from a real-world case study.

It uses the World Bank’s dataset, and the tutorial applies everything I have learned recently to work with it.

Dictionaries for Data Science

Use the zip() function to combine two lists into a zip object, then convert it into a dictionary.

Before I share the sample code, let me share again what the zip() function does.

Using zip()
It allows us to stitch together an arbitrary number of iterables. In other words, zipping them together creates a zip object, which is an iterator of tuples.
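
Here is a tiny standalone demo of zip() (my own sketch): it pairs elements positionally, stops at the shortest input, and the resulting iterator of tuples converts directly into a dict. In the tutorial snippet below, feature_names (the header fields) and row_vals (one row of values) are variables provided by the exercise environment.

# My own demo: zip() pairs elements positionally; dict() consumes the pairs
names = ['Year', 'Value']
row = ['1960', '133.56']
print(list(zip(names, row)))  # [('Year', '1960'), ('Value', '133.56')]
print(dict(zip(names, row)))  # {'Year': '1960', 'Value': '133.56'}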

# Zip lists: zipped_lists
zipped_lists = zip(feature_names, row_vals)

# Create a dictionary: rs_dict
rs_dict = dict(zipped_lists)

# Print the dictionary
print(rs_dict)

# Output: {'CountryName': 'Arab World', 'CountryCode': 'ARB', 'IndicatorName': 'Adolescent fertility rate (births per 1,000 women ages 15-19)', 'IndicatorCode': 'SP.ADO.TFRT', 'Year': '1960', 'Value': '133.56090740552298'}

Next, the tutorial wants us to create a user-defined function with two parameters. I can re-use the above code, wrap it in a user-defined function, and call it with two arguments, feature_names and row_vals.

# Define lists2dict()
def lists2dict(list1, list2):
    """Return a dictionary where list1 provides
    the keys and list2 provides the values."""

    # Zip lists: zipped_lists
    zipped_lists = zip(list1, list2)

    # Create a dictionary: rs_dict
    rs_dict = dict(zipped_lists)

    # Return the dictionary
    return rs_dict

# Call lists2dict: rs_fxn
rs_fxn = lists2dict(feature_names, row_vals)

# Print rs_fxn
print(rs_fxn)

It should give the same result when the code is run. Next, the tutorial requires me to use a list comprehension: turning a bunch of lists into a list of dictionaries, where the keys are the header names and the values are the row entries.

The syntax:
[output expression for iterator variable in iterable]

The question in the tutorial is,
Create a list comprehension that generates a dictionary using lists2dict() for each sublist in row_lists. The keys are from the feature_names list and the values are the row entries in row_lists. Use sublist as your iterator variable and assign the resulting list of dictionaries to list_of_dicts.

This is how the code looked on screen before I filled it in. As above, sublist is the iterator variable, so it goes between the “for” and “in” keywords. The instruction says,
for each sublist in row_lists

indirectly, it means,

# Turn list of lists into list of dicts: list_of_dicts
list_of_dicts = [--- for sublist in row_lists]

The lists2dict() function I created above returns a dictionary. The question says,
generates a dictionary using lists2dict()

indirectly, it means calling the lists2dict() function in the output expression. But if I code,

# Turn list of lists into list of dicts: list_of_dicts
list_of_dicts = [lists2dict(feature_names, row_lists) for sublist in row_lists]

The output was very wrong, and when I clicked the “Submit” button, it prompted me with an error message,
Check your call of lists2dict(). Did you correctly specify the second argument? Expected sublist, but got row_lists.

It expected sublist, and yes, the for loop reads each list in row_lists. I had a line of code to print each list,
print(row_lists[0])

It is more meaningful to use sublist as the 2nd argument rather than row_lists, since sublist holds the current row of the loop. Therefore, the final code is,

# Print the first two lists in row_lists
print(row_lists[0])
print(row_lists[1])

# Turn list of lists into list of dicts: list_of_dicts
list_of_dicts = [lists2dict(feature_names, sublist) for sublist in row_lists]

# Print the first two dictionaries in list_of_dicts
print(list_of_dicts[0])
print(list_of_dicts[1])
#Output:
{'CountryName': 'Arab World', 'CountryCode': 'ARB', 'IndicatorName': 'Adolescent fertility rate (births per 1,000 women ages 15-19)', 'IndicatorCode': 'SP.ADO.TFRT', 'Year': '1960', 'Value': '133.56090740552298'}
{'CountryName': 'Arab World', 'CountryCode': 'ARB', 'IndicatorName': 'Age dependency ratio (% of working-age population)', 'IndicatorCode': 'SP.POP.DPND', 'Year': '1960', 'Value': '87.7976011532547'}

The above code really took me some time to work out what I should write and why I did not get it right the first time. But that did not stop me from continuing the tutorial.

Turning it into a DataFrame
Up to this point in the case study, I used the zip() function, wrapped it in a user-defined function, and used the newly created function in a list comprehension to generate a list of dictionaries.

Next, the tutorial wants to convert the list of dictionaries into a pandas DataFrame. First and foremost, I need to import the pandas package. Refer to the code below:

# Import the pandas package
import pandas as pd

# Turn list of lists into list of dicts: list_of_dicts
list_of_dicts = [lists2dict(feature_names, sublist) for sublist in row_lists]

# Turn list of dicts into a DataFrame: df
df = pd.DataFrame(list_of_dicts)

# Print the head of the DataFrame
print(df.head())

Summary of the day:

  • zip() function combines the lists into a zip object.
  • Use user defined function in list comprehension.
  • Convert a list of dictionaries into a DataFrame.