\n"
+ ]
+ }
+ ],
+ "source": [
+ "def func(**losers):\n",
+ " print(losers)\n",
+ " print(losers['a'])\n",
+ " print(type(losers))\n",
+ " \n",
+ "func(a='Edsel', b='Betamax', c='mGaetz')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This works, but it's kinda annoying because you have to use strings for the keys, so some normal Python dictionaries will give you an error. {1:'Edsel', 2:'Betamax'} fails."
+ ]
+ },
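That failure is worth seeing for yourself: since **-unpacked keys become keyword argument names, non-string keys raise a TypeError. A minimal sketch (the function name here is just for illustration):

```python
def func(**kwargs):
    return kwargs

# String keys work fine: they become keyword argument names.
print(func(**{'a': 'Edsel'}))

# Non-string keys fail, because integers can't be keyword names.
try:
    func(**{1: 'Edsel', 2: 'Betamax'})
except TypeError as e:
    print('TypeError:', e)
```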
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Edsel\n"
+ ]
+ }
+ ],
+ "source": [
+ "def func(a, b, c):\n",
+ " print(a)\n",
+ "\n",
+ "losers = {'a':'Edsel', 'b':'Betamax', 'c':'mGaetz'}\n",
+ "func(**losers)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/Web Data Mining/Python BeautifulSoup Web Scraping Tutorial.ipynb b/Web Data Mining/Python BeautifulSoup Web Scraping Tutorial.ipynb
new file mode 100644
index 00000000..f7d55aa9
--- /dev/null
+++ b/Web Data Mining/Python BeautifulSoup Web Scraping Tutorial.ipynb
@@ -0,0 +1,514 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Python BeautifulSoup Web Scraping Tutorial\n",
+ "Learn to scrape data from the web using the Python BeautifulSoup bs4 library. \n",
+ "BeautifulSoup makes it easy to parse useful data out of an HTML page. \n",
+ "First install the bs4 library on your system by running at the command line, \n",
+ "*pip install beautifulsoup4* or *easy_install beautifulsoup4* (or bs4) \n",
+ "See [BeautifulSoup official documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) for the complete set of functions.\n",
+ "\n",
+ "### Import requests so we can fetch the html content of the webpage\n",
+ "You can see our example page has about 28k characters."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 61,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "28556\n"
+ ]
+ }
+ ],
+ "source": [
+ "import requests\n",
+ "r = requests.get('https://www.usclimatedata.com/climate/united-states/us')\n",
+ "print(len(r.text))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Import BeautifulSoup, and convert your HTML into a bs4 object\n",
+ "Now we can access specific HTML tags on the page using dot, just like a JSON object."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Climate United States - normals and averages\n",
+ "Climate United States - normals and averages\n"
+ ]
+ }
+ ],
+ "source": [
+ "from bs4 import BeautifulSoup\n",
+ "soup = BeautifulSoup(r.text)\n",
+ "print(soup.title)\n",
+ "print(soup.title.string)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Drill into the bs4 object to access page contents\n",
+ "soup.p will give you the contents of the first paragraph tag on the page. \n",
+ "soup.a gives you anchors / links on the page. \n",
+ "Get contents of an attribute inside an HTML tag using square brackets and perentheses. \n",
+ "Use .parent to get the parent object, and .next_sibling to get the next peer object. \n",
+ "**Use your browser's *inspect element* feature to find the tag for the data you want.**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "You are here: United States
\n",
+ "You are here: United States\n",
+ "\n",
+ "
\n",
+ "\n",
+ "US Climate Data on Facebook\n",
+ "\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(soup.p)\n",
+ "print(soup.p.text)\n",
+ "print(soup.a)\n",
+ "print(soup.a['title'])\n",
+ "print()\n",
+ "print(soup.p.parent)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Prettify() is handy for formatted printing \n",
+ "but note this works only on bs4 objects, not on strings, dicts or lists. For those you need to import pprint."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ " - \n",
+ " \n",
+ " Monthly\n",
+ " \n",
+ "
\n",
+ " \n",
+ " \n",
+ " You are here:\n",
+ " \n",
+ " United States\n",
+ " \n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(soup.p.parent.prettify())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### We need all the state links on this page\n",
+ "First we find_all anchor tags, and print out the href attribute, which is the actual link url. \n",
+ "But we see the result includes some links we don't want, so we need to filter those out."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "https://www.facebook.com/yourweatherservice/n",
+ "https://twitter.com/usclimatedata/n",
+ "http://www.usclimatedata.com/n",
+ "/climate/united-states/us\n",
+ "#summary\n",
+ "/climate/united-states/us\n",
+ "#\n",
+ "#\n",
+ "/climate/alabama/united-states/3170\n",
+ "/climate/kentucky/united-states/3187\n",
+ "/climate/north-dakota/united-states/3204\n",
+ "/climate/alaska/united-states/3171\n",
+ "/climate/louisiana/united-states/3188\n",
+ "/climate/ohio/united-states/3205\n",
+ "/climate/arizona/united-states/3172\n",
+ "/climate/maine/united-states/3189\n",
+ "/climate/oklahoma/united-states/3206\n",
+ "/climate/arkansas/united-states/3173\n",
+ "/climate/maryland/united-states/1872\n",
+ "/climate/oregon/united-states/3207\n",
+ "/climate/california/united-states/3174\n",
+ "/climate/massachusetts/united-states/3191\n",
+ "/climate/pennsylvania/united-states/3208\n",
+ "/climate/colorado/united-states/3175\n",
+ "/climate/michigan/united-states/3192\n",
+ "/climate/rhode-island/united-states/3209\n",
+ "/climate/connecticut/united-states/3176\n",
+ "/climate/minnesota/united-states/3193\n",
+ "/climate/south-carolina/united-states/3210\n",
+ "/climate/delaware/united-states/3177\n",
+ "/climate/mississippi/united-states/3194\n",
+ "/climate/south-dakota/united-states/3211\n",
+ "/climate/district-of-columbia/united-states/3178\n",
+ "/climate/missouri/united-states/3195\n",
+ "/climate/tennessee/united-states/3212\n",
+ "/climate/florida/united-states/3179\n",
+ "/climate/montana/united-states/919\n",
+ "/climate/texas/united-states/3213\n",
+ "/climate/georgia/united-states/3180\n",
+ "/climate/nebraska/united-states/3197\n",
+ "/climate/utah/united-states/3214\n",
+ "/climate/hawaii/united-states/3181\n",
+ "/climate/nevada/united-states/3198\n",
+ "/climate/vermont/united-states/3215\n",
+ "/climate/idaho/united-states/3182\n",
+ "/climate/new-hampshire/united-states/3199\n",
+ "/climate/virginia/united-states/3216\n",
+ "/climate/illinois/united-states/3183\n",
+ "/climate/new-jersey/united-states/3200\n",
+ "/climate/washington/united-states/3217\n",
+ "/climate/indiana/united-states/3184\n",
+ "/climate/new-mexico/united-states/3201\n",
+ "/climate/west-virginia/united-states/3218\n",
+ "/climate/iowa/united-states/3185\n",
+ "/climate/new-york/united-states/3202\n",
+ "/climate/wisconsin/united-states/3219\n",
+ "/climate/kansas/united-states/3186\n",
+ "/climate/north-carolina/united-states/3203\n",
+ "/climate/wyoming/united-states/3220\n",
+ "https://www.yourweatherservice.com/n",
+ "https://www.climatedata.eu/n",
+ "https://www.weernetwerk.nl/n",
+ "/about-us.php\n",
+ "/disclaimer.php\n",
+ "/cookies.php\n"
+ ]
+ }
+ ],
+ "source": [
+ "for link in soup.find_all('a'):\n",
+ " print(link.get('href'))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Filter urls using string functions\n",
+ "We just add an *if* to check conditions, then add the good ones to a list. \n",
+ "In the end we get 51 state links, including Washington DC."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "51\n"
+ ]
+ }
+ ],
+ "source": [
+ "base_url = 'https://www.usclimatedata.com/'\n",
+ "state_links = []\n",
+ "for link in soup.find_all('a'):\n",
+ " url = link.get('href')\n",
+ " if url and '/climate/' in url and '/climate/united-states/us' not in url:\n",
+ " state_links.append(url)\n",
+ "print(len(state_links))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Test getting the data for one state\n",
+ "then print the title for that page."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Climate Ohio - temperature, rainfall and average\n"
+ ]
+ }
+ ],
+ "source": [
+ "r = requests.get(base_url + state_links[5])\n",
+ "soup = BeautifulSoup(r.text)\n",
+ "print(soup.title.string)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### The data we need is in *tr* tags\n",
+ "But look, there are 58 tr tags on the page, and we only want 2 of them - the *Average high* rows."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 37,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "58\n"
+ ]
+ }
+ ],
+ "source": [
+ "rows = soup.find_all('tr')\n",
+ "print(len(rows))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Filter rows, and add temp data to a list\n",
+ "We use a list comprehension to filter the rows. \n",
+ "Then we have only 2 rows left. \n",
+ "We iterate through those 2 rows, and add all the temps from data cells (td) into a list."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 50,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "2\n",
+ "['36', '40', '52', '63', '73', '82', '85', '84', '77', '65', '52', '41']\n"
+ ]
+ }
+ ],
+ "source": [
+ "rows = [row for row in rows if 'Average high' in str(row)]\n",
+ "print(len(rows))\n",
+ "\n",
+ "high_temps = []\n",
+ "for row in rows:\n",
+ " tds = row.find_all('td')\n",
+ " for i in range(1,7):\n",
+ " high_temps.append(tds[i].text)\n",
+ "print(high_temps)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Get the name of the State\n",
+ "First attempt we just split the title string into a list, and grab the second word. \n",
+ "But that doesn't work for 2-word states like New York and North Carolina. \n",
+ "So instead we slice the string from first blank to the hyphen. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 56,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Wyoming\n",
+ "Wyoming\n"
+ ]
+ }
+ ],
+ "source": [
+ "state = soup.title.string.split()[1]\n",
+ "print(state)\n",
+ "s = soup.title.string\n",
+ "state = s[s.find(' '):s.find('-')].strip()\n",
+ "print(state)"
+ ]
+ },
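The difference between the two approaches is easiest to see on a two-word state; a small sketch using a made-up title string in the same format as the page titles:

```python
# Hypothetical title string matching the format of the climate pages.
s = 'Climate New York - temperature, rainfall and average'

# split()[1] grabs only the second word, which is wrong for two-word states.
print(s.split()[1])                        # prints 'New'

# Slicing from the first space to the hyphen keeps the full name.
state = s[s.find(' '):s.find('-')].strip()
print(state)                               # prints 'New York'
```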
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Add state name and temp list to the data dictionary\n",
+ "For a single state, this is what our scraped data looks like. \n",
+ "In this example we only got monthly highs by state, but you could drill into cities, and could get lows and precipitation. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 51,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{'Ohio': ['36', '40', '52', '63', '73', '82', '85', '84', '77', '65', '52', '41']}\n"
+ ]
+ }
+ ],
+ "source": [
+ "data = {}\n",
+ "data[state] = high_temps\n",
+ "print(data)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Put it all together and iterate 51 states\n",
+ "We loop through our 51-state list, and get high temp data for each state, and add it to the data dict. \n",
+ "This combines all our work above into a single for loop. \n",
+ "The result is a dict with 51 states and a list of monthly highs for each."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 59,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{'Alabama': ['57', '62', '70', '77', '84', '90', '92', '92', '87', '78', '69', '60'], 'Kentucky': ['40', '45', '55', '66', '75', '83', '87', '86', '79', '68', '55', '44'], 'North Dakota': ['23', '28', '40', '57', '68', '77', '85', '83', '72', '58', '40', '26'], 'Alaska': ['23', '27', '34', '44', '56', '63', '65', '64', '55', '40', '28', '25'], 'Louisiana': ['62', '65', '72', '78', '85', '89', '91', '91', '87', '80', '72', '64'], 'Ohio': ['36', '40', '52', '63', '73', '82', '85', '84', '77', '65', '52', '41'], 'Arizona': ['67', '71', '77', '85', '95', '104', '106', '104', '100', '89', '76', '66'], 'Maine': ['28', '32', '40', '53', '65', '74', '79', '78', '70', '57', '45', '33'], 'Oklahoma': ['50', '55', '63', '72', '80', '88', '94', '93', '85', '73', '62', '51'], 'Arkansas': ['51', '55', '64', '73', '81', '89', '92', '93', '86', '75', '63', '52'], 'Maryland': ['42', '46', '54', '65', '75', '85', '89', '87', '80', '68', '58', '46'], 'Oregon': ['48', '52', '56', '61', '68', '74', '82', '82', '77', '64', '53', '46'], 'California': ['54', '60', '65', '71', '80', '87', '92', '91', '87', '78', '64', '54'], 'Massachusetts': ['36', '39', '45', '56', '66', '76', '81', '80', '72', '61', '51', '41'], 'Pennsylvania': ['40', '44', '53', '64', '74', '83', '87', '85', '78', '67', '56', '45'], 'Colorado': ['45', '46', '54', '61', '72', '82', '90', '88', '79', '66', '52', '45'], 'Michigan': ['30', '33', '44', '58', '69', '78', '82', '80', '73', '60', '47', '34'], 'Rhode Island': ['37', '40', '48', '59', '68', '78', '83', '81', '74', '63', '53', '42'], 'Connecticut': ['37', '40', '47', '58', '68', '77', '82', '81', '74', '63', '53', '42'], 'Minnesota': ['26', '31', '43', '58', '71', '80', '85', '82', '73', '59', '42', '29'], 'South Carolina': ['56', '60', '68', '76', '84', '90', '93', '91', '85', '76', '67', '58'], 'Delaware': ['43', '47', '55', '66', '75', '83', '87', '85', '79', '69', '58', '47'], 'Mississippi': ['56', '60', '69', '76', '83', '89', '92', '92', '87', '77', '67', 
'58'], 'South Dakota': ['22', '27', '39', '57', '69', '78', '84', '82', '72', '58', '39', '25'], 'District of Columbia': ['42', '44', '53', '64', '75', '83', '87', '84', '78', '67', '55', '45'], 'Missouri': ['40', '45', '56', '67', '75', '83', '88', '88', '80', '69', '56', '43'], 'Tennessee': ['50', '55', '64', '73', '81', '89', '92', '91', '85', '74', '63', '52'], 'Florida': ['64', '67', '74', '80', '87', '91', '92', '92', '88', '81', '73', '65'], 'Montana': ['33', '39', '48', '58', '67', '76', '86', '85', '73', '59', '43', '32'], 'Texas': ['62', '65', '72', '80', '87', '92', '96', '97', '91', '82', '71', '63'], 'Georgia': ['52', '57', '64', '72', '81', '86', '90', '88', '82', '73', '64', '54'], 'Nebraska': ['32', '37', '50', '63', '73', '84', '88', '86', '77', '64', '48', '36'], 'Utah': ['38', '44', '53', '61', '71', '82', '90', '89', '78', '65', '50', '40'], 'Hawaii': ['80', '80', '81', '83', '85', '87', '88', '89', '89', '87', '84', '81'], 'Nevada': ['45', '50', '57', '63', '71', '81', '90', '88', '80', '68', '54', '45'], 'Vermont': ['27', '31', '40', '55', '67', '76', '81', '79', '70', '57', '46', '33'], 'Idaho': ['38', '45', '55', '62', '72', '81', '91', '90', '79', '65', '48', '38'], 'New Hampshire': ['31', '35', '44', '57', '69', '77', '82', '81', '73', '60', '48', '36'], 'Virginia': ['47', '51', '60', '70', '78', '86', '90', '88', '81', '71', '61', '51'], 'Illinois': ['32', '36', '46', '59', '70', '81', '84', '82', '75', '63', '48', '36'], 'New Jersey': ['39', '42', '51', '62', '72', '82', '86', '84', '77', '65', '55', '44'], 'Washington': ['47', '50', '54', '58', '65', '70', '76', '76', '71', '60', '51', '46'], 'Indiana': ['35', '40', '51', '63', '73', '82', '85', '83', '77', '65', '52', '39'], 'New Mexico': ['44', '48', '56', '65', '74', '83', '86', '83', '78', '67', '53', '43'], 'West Virginia': ['42', '47', '56', '68', '75', '82', '85', '84', '78', '68', '57', '46'], 'Iowa': ['31', '36', '49', '62', '72', '82', '86', '84', '76', '63', '48', '34'], 'New 
York': ['39', '42', '50', '60', '71', '79', '85', '83', '76', '65', '54', '44'], 'Wisconsin': ['29', '33', '42', '54', '65', '75', '80', '78', '71', '59', '46', '33'], 'Kansas': ['40', '45', '56', '67', '76', '85', '89', '89', '80', '68', '55', '42'], 'North Carolina': ['51', '55', '63', '72', '79', '86', '89', '87', '81', '72', '62', '53'], 'Wyoming': ['40', '40', '47', '55', '65', '75', '83', '81', '72', '59', '47', '38']}\n"
+ ]
+ }
+ ],
+ "source": [
+ "data = {}\n",
+ "for state_link in state_links:\n",
+ " url = base_url + state_link\n",
+ " r = requests.get(base_url + state_link)\n",
+ " soup = BeautifulSoup(r.text)\n",
+ " rows = soup.find_all('tr')\n",
+ " rows = [row for row in rows if 'Average high' in str(row)]\n",
+ " high_temps = []\n",
+ " for row in rows:\n",
+ " tds = row.find_all('td')\n",
+ " for i in range(1,7):\n",
+ " high_temps.append(tds[i].text)\n",
+ " s = soup.title.string\n",
+ " state = s[s.find(' '):s.find('-')].strip()\n",
+ " data[state] = high_temps\n",
+ "print(data)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Save to CSV file\n",
+ "Lastly, we might want to write all this data to a CSV file. \n",
+ "Here's a quick easy way to do that."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 60,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import csv\n",
+ "\n",
+ "with open('high_temps.csv','w') as f:\n",
+ " w = csv.writer(f)\n",
+ " w.writerows(data.items())"
+ ]
+ }
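One thing to note: writerows(data.items()) stores each temperature list as its string repr in a single CSV column. If you want one column per month instead, write [state] + temps per row; a sketch with made-up sample data:

```python
import csv
import io

# Made-up sample in the same shape as the scraped data dict.
data = {'Ohio': ['36', '40', '52'], 'Texas': ['62', '65', '72']}

buf = io.StringIO()          # in-memory file, so the sketch needs no disk I/O
w = csv.writer(buf)
for state, temps in data.items():
    w.writerow([state] + temps)   # one column for the state, one per month

print(buf.getvalue())
```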
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/Web Data Mining/Python Requests.ipynb b/Web Data Mining/Python Requests.ipynb
new file mode 100644
index 00000000..66f27fed
--- /dev/null
+++ b/Web Data Mining/Python Requests.ipynb
@@ -0,0 +1,419 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Python Requests\n",
+ "(c) 2019, Joe James.\n",
+ "MIT License.\n",
+ "\n",
+ "Tutorial on using the [Requests](http://docs.python-requests.org/en/master/user/quickstart/) library to access HTTP requests, GET, POST, PUT, DELETE, HEAD, OPTIONS. \n",
+ "This notebook also covers how to use the Python [JSON](https://docs.python.org/3/library/json.html) library to parse values out of a GET response. \n",
+ "If you don't have the requests library installed you can run 'pip install requests' or some equivalent command for your system in the console window. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 78,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import requests\n",
+ "import json"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 79,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "r = requests.get('https://api.github.com/events')\n",
+ "r = requests.post('https://httpbin.org/post', data = {'name':'Joe'})\n",
+ "r = requests.put('https://httpbin.org/put', data = {'name':'Joe'})\n",
+ "r = requests.delete('https://httpbin.org/delete')\n",
+ "r = requests.head('https://httpbin.org/get')\n",
+ "r = requests.options('https://httpbin.org/get')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### GET Requests - Passing Parameters in URLs\n",
+ "A URL that returns an HTTP response in JSON format is called an API endpoint. \n",
+ "Here's an example, https://httpbin.org/get \n",
+ "\n",
+ "With GET requests we can add parameters onto the URL to retrieve specific data. \n",
+ "We define the params as a dictionary, and add params=payload to the Request. \n",
+ "The Requests library builds the whole URL for us."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 80,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "https://httpbin.org/get?key1=value1&key2=value2\n"
+ ]
+ }
+ ],
+ "source": [
+ "payload = {'key1': 'value1', 'key2': 'value2'}\n",
+ "r = requests.get('https://httpbin.org/get', params=payload)\n",
+ "print(r.url)"
+ ]
+ },
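Under the hood this is ordinary URL encoding; the standard library's urllib.parse.urlencode shows the query string requests builds (a sketch, independent of the request itself):

```python
from urllib.parse import urlencode

payload = {'key1': 'value1', 'key2': 'value2'}
print(urlencode(payload))        # prints key1=value1&key2=value2

# doseq=True expands a list value into repeated keys,
# matching how requests serializes list parameters.
print(urlencode({'key2': ['value2', 'value3']}, doseq=True))
# prints key2=value2&key2=value3
```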
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**Passing a List as a parameter** \n",
+ "Still use key:value pairs, with the list as the value. \n",
+ "You can see here all the different attributes included in an HTTP Request response."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 81,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "URL: https://httpbin.org/get?key1=value1&key2=value2&key2=value3\n",
+ "ENCODING: None\n",
+ "STATUS_CODE: 200\n",
+ "HEADERS: {'Access-Control-Allow-Credentials': 'true', 'Access-Control-Allow-Origin': '*', 'Content-Encoding': 'gzip', 'Content-Type': 'application/json', 'Date': 'Tue, 26 Feb 2019 18:13:35 GMT', 'Server': 'nginx', 'Content-Length': '229', 'Connection': 'keep-alive'}\n",
+ "TEXT: {\n",
+ " \"args\": {\n",
+ " \"key1\": \"value1\", \n",
+ " \"key2\": [\n",
+ " \"value2\", \n",
+ " \"value3\"\n",
+ " ]\n",
+ " }, \n",
+ " \"headers\": {\n",
+ " \"Accept\": \"*/*\", \n",
+ " \"Accept-Encoding\": \"gzip, deflate\", \n",
+ " \"Host\": \"httpbin.org\", \n",
+ " \"User-Agent\": \"python-requests/2.21.0\"\n",
+ " }, \n",
+ " \"origin\": \"99.99.39.153, 99.99.39.153\", \n",
+ " \"url\": \"https://httpbin.org/get?key1=value1&key2=value2&key2=value3\"\n",
+ "}\n",
+ "\n",
+ "CONTENT: b'{\\n \"args\": {\\n \"key1\": \"value1\", \\n \"key2\": [\\n \"value2\", \\n \"value3\"\\n ]\\n }, \\n \"headers\": {\\n \"Accept\": \"*/*\", \\n \"Accept-Encoding\": \"gzip, deflate\", \\n \"Host\": \"httpbin.org\", \\n \"User-Agent\": \"python-requests/2.21.0\"\\n }, \\n \"origin\": \"99.99.39.153, 99.99.39.153\", \\n \"url\": \"https://httpbin.org/get?key1=value1&key2=value2&key2=value3\"\\n}\\n'\n",
+ "JSON: >\n"
+ ]
+ }
+ ],
+ "source": [
+ "payload = {'key1': 'value1', 'key2': ['value2', 'value3']}\n",
+ "r = requests.get('https://httpbin.org/get', params=payload)\n",
+ "print('URL: ', r.url)\n",
+ "print('ENCODING: ', r.encoding)\n",
+ "print('STATUS_CODE: ', r.status_code)\n",
+ "print('HEADERS: ', r.headers)\n",
+ "print('TEXT: ', r.text)\n",
+ "print('CONTENT: ', r.content)\n",
+ "print('JSON: ', r.json)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### POST Requests\n",
+ "We can add parameters to a POST request in Dictionary format, but we use data=payload. \n",
+ "POST requests are used to upload new records to the server. \n",
+ "POST would typically be used to get data from a web form and submit it to the server. "
+ ]
+ },
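The form body that data=payload produces is the same application/x-www-form-urlencoded string; a quick sketch with the standard library (note the length matches the Content-Length of 23 shown in the response below):

```python
from urllib.parse import urlencode

# Same payload as the POST example; this is the body requests sends.
payload = {'key1': 'value1', 'key2': 'value2'}
body = urlencode(payload)
print(body)        # prints key1=value1&key2=value2
print(len(body))   # prints 23
```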
+ {
+ "cell_type": "code",
+ "execution_count": 82,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{\n",
+ " \"args\": {}, \n",
+ " \"data\": \"\", \n",
+ " \"files\": {}, \n",
+ " \"form\": {\n",
+ " \"key1\": \"value1\", \n",
+ " \"key2\": \"value2\"\n",
+ " }, \n",
+ " \"headers\": {\n",
+ " \"Accept\": \"*/*\", \n",
+ " \"Accept-Encoding\": \"gzip, deflate\", \n",
+ " \"Content-Length\": \"23\", \n",
+ " \"Content-Type\": \"application/x-www-form-urlencoded\", \n",
+ " \"Host\": \"httpbin.org\", \n",
+ " \"User-Agent\": \"python-requests/2.21.0\"\n",
+ " }, \n",
+ " \"json\": null, \n",
+ " \"origin\": \"99.99.39.153, 99.99.39.153\", \n",
+ " \"url\": \"https://httpbin.org/post/"\n",
+ "}\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "r = requests.post('https://httpbin.org/post', data = {'name':'Joe'})\n",
+ "\n",
+ "payload = {'key1': 'value1', 'key2': 'value2'}\n",
+ "r = requests.post(\"https://httpbin.org/post/", data=payload)\n",
+ "print(r.text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Using Requests to GET Currency Exchange Data\n",
+ "Here's a handy endpoint where we can GET foreign currency exchange rates in JSON format, https://api.exchangeratesapi.io/latest"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 83,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{\"rates\":{\"MXN\":21.7145,\"AUD\":1.5897,\"HKD\":8.9178,\"RON\":4.7626,\"HRK\":7.4275,\"CHF\":1.1371,\"IDR\":15917.9,\"CAD\":1.5024,\"USD\":1.1361,\"ZAR\":15.752,\"JPY\":125.93,\"BRL\":4.2574,\"HUF\":317.06,\"CZK\":25.663,\"NOK\":9.7725,\"INR\":80.853,\"PLN\":4.3282,\"ISK\":136.1,\"PHP\":59.144,\"SEK\":10.5858,\"ILS\":4.1148,\"GBP\":0.86055,\"SGD\":1.5332,\"CNY\":7.6077,\"TRY\":6.0254,\"MYR\":4.6157,\"RUB\":74.6158,\"NZD\":1.652,\"KRW\":1270.0,\"THB\":35.583,\"BGN\":1.9558,\"DKK\":7.4616},\"base\":\"EUR\",\"date\":\"2019-02-26\"}\n"
+ ]
+ }
+ ],
+ "source": [
+ "url = 'https://api.exchangeratesapi.io/latest'\n",
+ "r = requests.get(url)\n",
+ "print(r.text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**It looks like the default is base:EUR, but we want exchange rates for USD, so we can pass in a parameter for base. \n",
+ "We can also put in any date we want.**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 84,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{\"rates\":{\"MXN\":18.8315549401,\"AUD\":1.2571475116,\"HKD\":7.823572534,\"RON\":3.7694876599,\"HRK\":6.0552252179,\"CHF\":0.9610654069,\"IDR\":13307.03754989,\"CAD\":1.2432190274,\"USD\":1.0,\"JPY\":110.621487334,\"BRL\":3.1959762157,\"PHP\":50.2997474953,\"CZK\":20.7957970188,\"NOK\":7.8771686894,\"INR\":63.5175531482,\"PLN\":3.3954549157,\"MYR\":3.9560153132,\"ZAR\":12.302191089,\"ILS\":3.399609025,\"GBP\":0.7252830496,\"SGD\":1.3214140262,\"HUF\":251.6086991936,\"EUR\":0.8145312373,\"CNY\":6.4380548994,\"TRY\":3.7828459721,\"SEK\":8.0096929217,\"RUB\":56.4333306182,\"NZD\":1.3706931661,\"KRW\":1063.5660177568,\"THB\":31.9247373137,\"BGN\":1.5930601939,\"DKK\":6.0679319052},\"base\":\"USD\",\"date\":\"2018-01-15\"}\n"
+ ]
+ }
+ ],
+ "source": [
+ "url = 'https://api.exchangeratesapi.io/2018-01-15'\n",
+ "r = requests.get(url, params={'base':'USD'})\n",
+ "print(r.text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Decoding JSON data\n",
+ "Now we have the rates in JSON format. We need to convert that to usable data. \n",
+ "The JSON library basically has two functions: \n",
+ "- json.loads( ) converts a text string into Python dict/list objects. \n",
+ "- json.dumps( ) converts dict/list objects into a string. \n",
+ "\n",
+ "We need to decode the JSON data into a dictionary, then get the rate for GBP, convert it to a float, and do a conversion."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 85,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{'MXN': 18.8315549401, 'AUD': 1.2571475116, 'HKD': 7.823572534, 'RON': 3.7694876599, 'HRK': 6.0552252179, 'CHF': 0.9610654069, 'IDR': 13307.03754989, 'CAD': 1.2432190274, 'USD': 1.0, 'JPY': 110.621487334, 'BRL': 3.1959762157, 'PHP': 50.2997474953, 'CZK': 20.7957970188, 'NOK': 7.8771686894, 'INR': 63.5175531482, 'PLN': 3.3954549157, 'MYR': 3.9560153132, 'ZAR': 12.302191089, 'ILS': 3.399609025, 'GBP': 0.7252830496, 'SGD': 1.3214140262, 'HUF': 251.6086991936, 'EUR': 0.8145312373, 'CNY': 6.4380548994, 'TRY': 3.7828459721, 'SEK': 8.0096929217, 'RUB': 56.4333306182, 'NZD': 1.3706931661, 'KRW': 1063.5660177568, 'THB': 31.9247373137, 'BGN': 1.5930601939, 'DKK': 6.0679319052}\n",
+ "0.7252830496\n",
+ "200USD = 145.05660992 GBP\n"
+ ]
+ }
+ ],
+ "source": [
+ "rates_json = json.loads(r.text)['rates']\n",
+ "print(rates_json)\n",
+ "print(rates_json['GBP'])\n",
+ "gbp = float(rates_json['GBP'])\n",
+ "print('200USD = ', str(gbp * 200), 'GBP')"
+ ]
+ },
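The round trip can be sketched with a made-up snippet of the same shape as the API response:

```python
import json

# Made-up response text mimicking the exchange-rate API's shape.
text = '{"rates": {"GBP": 0.7252830496, "EUR": 0.8145312373}, "base": "USD"}'

rates = json.loads(text)['rates']   # str -> dict
print(round(rates['GBP'] * 200, 2))  # prints 145.06

# dumps goes the other way: dict -> str
print(json.dumps({'base': 'USD'}))   # prints {"base": "USD"}
```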
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Using Requests to GET Song Data\n",
+ "Every API has documentation on how to use it. \n",
+ "You can find the docs for this Song Data API [here.](https://documenter.getpostman.com/view/3719697/RzfarXB4)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 86,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[{\"id\":12,\"name\":\"Beatles\",\"year_started\":1960,\"year_quit\":1970,\"text\":\"Beatles\"},{\"id\":14,\"name\":\"Dario G\",\"year_started\":1997,\"year_quit\":null,\"text\":\"Dario G\"},{\"id\":16,\"name\":\"Fleetwood Mac\",\"year_started\":1967,\"year_quit\":null,\"text\":\"Fleetwood Mac\"},{\"id\":17,\"name\":\"Blink 182\",\"year_started\":1992,\"year_quit\":null,\"text\":\"Blink 182\"},{\"id\":18,\"name\":\"Bloc Party\",\"year_started\":2002,\"year_quit\":null,\"text\":\"Bloc Party\"},{\"id\":19,\"name\":\"The Temper Trap\",\"year_started\":2005,\"year_quit\":null,\"text\":\"The Temper Trap\"},{\"id\":20,\"name\":\"MGMT\",\"year_started\":2002,\"year_quit\":null,\"text\":\"MGMT\"},{\"id\":21,\"name\":\"Coldplay\",\"year_started\":1996,\"year_quit\":null,\"text\":\"Coldplay\"},{\"id\":22,\"name\":\"\n"
+ ]
+ }
+ ],
+ "source": [
+ "url = 'https://musicdemons.com/api/v1/artist'\n",
+ "r = requests.get(url)\n",
+ "print(r.text[:700])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 87,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{\"id\":21,\"name\":\"Coldplay\",\"year_started\":1996,\"year_quit\":null,\"text\":\"Coldplay\"}\n"
+ ]
+ }
+ ],
+ "source": [
+ "url = 'https://musicdemons.com/api/v1/artist/21'\n",
+ "r = requests.get(url)\n",
+ "print(r.text)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 88,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{\"id\":21,\"name\":\"Coldplay\",\"year_started\":1996,\"year_quit\":null,\"text\":\"Coldplay\",\"songs\":[{\"id\":1,\"title\":\"Something Just Like This\",\"released\":\"02\\/22\\/2017\",\"text\":\"Something Just Like This\",\"youtube_id\":\"FM7MFYoylVs\",\"pivot\":{\"artist_id\":21,\"song_id\":1},\"subject\":{\"id\":226,\"subjectable_id\":1,\"subjectable_type\":\"App\\\\Entities\\\\MusicDemons\\\\Song\"}},{\"id\":11,\"title\":\"Hymn For The Weekend\",\"released\":\"01\\/25\\/2016\",\"text\":\"Hymn For The Weekend\",\"youtube_id\":\"YykjpeuMNEk\",\"pivot\":{\"artist_id\":21,\"song_id\":11},\"subject\":{\"id\":233,\"subjectable_id\":11,\"subjectable_type\":\"App\\\\Entities\\\\MusicDemons\\\\Song\"}},{\"id\":78,\"title\":\"Sky Full Of Stars\",\"released\":\"05\\/02\\/2014\",\"text\":\"Sky Full Of Stars\",\n"
+ ]
+ }
+ ],
+ "source": [
+ "url = 'https://musicdemons.com/api/v1/artist/21'\n",
+ "headers = {'with': 'songs,members'}\n",
+ "r = requests.get(url, headers=headers)\n",
+ "print(r.text[:700])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 89,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Coldplay\n",
+ "Something Just Like This\n",
+ "Hymn For The Weekend\n",
+ "Sky Full Of Stars\n",
+ "Fix You\n",
+ "Brothers & Sisters\n",
+ "Shiver\n",
+ "The Scientist\n",
+ "Yellow\n",
+ "Trouble\n",
+ "Every Teardrop Is a Waterfall\n",
+ "Life in Technicolor ii\n",
+ "Adventure Of A Lifetime\n",
+ "Magic\n",
+ "The Hardest Part\n",
+ "Viva la Vida\n",
+ "1.36\n",
+ "42\n",
+ "A Head Full of Dreams\n",
+ "A Hopeful Transmission\n",
+ "A Message\n",
+ "A Rush of Blood to the Head\n",
+ "Princess of China\n"
+ ]
+ }
+ ],
+ "source": [
+ "import json\n",
+ "text_json = json.loads(r.text)\n",
+ "print(text_json['name'])\n",
+ "for song in text_json['songs']:\n",
+ " print(song['title'])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Tips on breaking down JSON \n",
+ "To get data out of a JSON object, which is a combination of lists and dictionaries, \n",
+ "just remember for lists you need a numerical index, and for key-value pairs you need a text index. \n",
+ "So if the object looks like this, {\"cars\":[{\"id\":1,\"model\":\"Camry\"},... you can access the model of the first car with text['cars'][0]['model']"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
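The JSON-indexing tip at the end of the notebook above can be tried offline, without calling the API, by feeding `json.loads` a string shaped like the musicdemons artist response (field names copied from the outputs shown in the notebook):

```python
import json

# A small JSON string shaped like the artist response printed above
text = '{"id": 21, "name": "Coldplay", "songs": [{"id": 1, "title": "Something Just Like This"}]}'

artist = json.loads(text)           # top level is a dict, so use a text index
print(artist['name'])               # Coldplay
print(artist['songs'][0]['title'])  # numeric index into the list, then a text index
```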
diff --git a/addition of two number b/addition of two number
new file mode 100644
index 00000000..d31335e3
--- /dev/null
+++ b/addition of two number
@@ -0,0 +1,9 @@
+# Store input numbers
+num1 = input('Enter first number: ')
+num2 = input('Enter second number: ')
+
+# Add the two numbers ('total' avoids shadowing the built-in sum())
+total = float(num1) + float(num2)
+
+# Display the sum
+print('The sum of {0} and {1} is {2}'.format(num1, num2, total))
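`input()` always returns a string, which is why the script above converts with `float()` before adding; non-numeric input would raise a `ValueError`. A sketch of a small validating helper (the helper name is my own):

```python
def parse_number(text):
    # Return the value as a float, or None if the text is not numeric.
    try:
        return float(text)
    except ValueError:
        return None

print(parse_number('3.5'))  # 3.5
print(parse_number('abc'))  # None
```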
diff --git a/deep_copy.ipynb b/deep_copy.ipynb
new file mode 100644
index 00000000..a11d7052
--- /dev/null
+++ b/deep_copy.ipynb
@@ -0,0 +1,248 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Python: how to Copy and Deep Copy Python Lists \n",
+ "(c) Joe James 2023"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Assignment is not a Copy\n",
+ "listA = listB does not create a copy. Changes to one list will be reflected in the other.\n",
+ "listA and listB both reference the exact same list."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[2, 44, 6, [1, 3]]\n",
+ "140554034568968\n",
+ "140554034568968\n"
+ ]
+ }
+ ],
+ "source": [
+ "listA = [2, 4, 6, [1, 3]]\n",
+ "listB = listA\n",
+ "listB[1] = 44\n",
+ "print(listA)\n",
+ "print(id(listA))\n",
+ "print(id(listB))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Shallow copy using the list() constructor\n",
+ "A shallow copy fully copies only 1D lists of immutable data types. \n",
+ "Sublists, dicts, and other nested objects keep the same reference, so changes to them appear in both lists."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[2, 4, 6, [55, 3]]\n"
+ ]
+ }
+ ],
+ "source": [
+ "listA = [2, 4, 6, [1, 3]]\n",
+ "listB = list(listA)\n",
+ "listB[1] = 44\n",
+ "listB[3][0] = 55\n",
+ "print(listA)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Other ways to make a Shallow copy\n",
+ "List comprehensions, list.copy(), or copy.copy() can also be used to make *shallow* copies."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[2, 4, 6, [55, 3]]\n"
+ ]
+ }
+ ],
+ "source": [
+ "listA = [2, 4, 6, [1, 3]]\n",
+ "listB = [x for x in listA]\n",
+ "listB[1] = 44\n",
+ "listB[3][0] = 55\n",
+ "print(listA)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[2, 4, 6, [55, 3]]\n"
+ ]
+ }
+ ],
+ "source": [
+ "listA = [2, 4, 6, [1, 3]]\n",
+ "listB = listA.copy()\n",
+ "listB[1] = 44\n",
+ "listB[3][0] = 55\n",
+ "print(listA)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[2, 4, 6, [55, 3]]\n"
+ ]
+ }
+ ],
+ "source": [
+ "import copy\n",
+ "listA = [2, 4, 6, [1, 3]]\n",
+ "listB = copy.copy(listA)\n",
+ "listB[1] = 44\n",
+ "listB[3][0] = 55\n",
+ "print(listA)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### How to Deep Copy a Python List\n",
+ "use copy.deepcopy()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[2, 4, 6, [1, 3]]\n"
+ ]
+ }
+ ],
+ "source": [
+ "import copy\n",
+ "listA = [2, 4, 6, [1, 3]]\n",
+ "listB = copy.deepcopy(listA)\n",
+ "listB[1] = 44\n",
+ "listB[3][0] = 55\n",
+ "print(listA)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Deepcopy with Objects"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "140554035637216 140554035637104\n",
+ "140554035637216 140554035637216\n",
+ "140554035637216 140554035637048\n"
+ ]
+ }
+ ],
+ "source": [
+ "class Pony():\n",
+ " def __init__(self, n):\n",
+ " self.name = n\n",
+ " \n",
+ "# copy.copy on an object gives you 2 unique objects (with same attributes)\n",
+ "pony1 = Pony('Pinto')\n",
+ "pony2 = copy.copy(pony1)\n",
+ "print(id(pony1), id(pony2))\n",
+ "\n",
+ "# copy.copy on a list of objects gives you 2 unique lists containing the exact same objects \n",
+ "# (ie. new list is a shallow copy)\n",
+ "m = [pony1, pony2]\n",
+ "n = copy.copy (m)\n",
+ "print(id(m[0]), id(n[0]))\n",
+ "\n",
+ "# use copy.deepcopy to deep copy a list of objects\n",
+ "n = copy.deepcopy (m)\n",
+ "print(id(m[0]), id(n[0]))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
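The notebook's `id()` checks can be condensed into assertions. A minimal sketch of the shallow-vs-deep distinction, using the same list as the cells above:

```python
import copy

listA = [2, 4, 6, [1, 3]]

shallow = copy.copy(listA)
deep = copy.deepcopy(listA)

# Shallow copy: a new outer list, but the sublist is the same object
assert shallow is not listA
assert shallow[3] is listA[3]

# Deep copy: the sublist is also duplicated
assert deep[3] is not listA[3]

# Mutating the sublist through the shallow copy leaks into listA...
shallow[3][0] = 55
print(listA)  # [2, 4, 6, [55, 3]]

# ...while the deep copy is unaffected
print(deep)   # [2, 4, 6, [1, 3]]
```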
diff --git a/dict_comprehensions.py b/dict_comprehensions.py
new file mode 100644
index 00000000..890c7221
--- /dev/null
+++ b/dict_comprehensions.py
@@ -0,0 +1,54 @@
+# Python Dictionary Comprehensions
+# (c) Joe James 2023
+
+# 1. math function to compute values using list
+dict1 = {x: 2*x for x in [0, 2, 4, 6]}
+print ('1. ', dict1)
+
+# 2. math function to compute values using range
+dict2 = {x: x**2 for x in range(0, 7, 2)}
+print ('2. ', dict2)
+
+# 3. from chars in a string
+dict3 = {x: ord(x) for x in 'Kumar'}
+print ('3. ', dict3)
+
+# 4. given lists of keys & values
+x = ['Aditii', 'Brandon', 'Clumley', 'Magomed', 'Rishi']
+y = [1, 2, 3, 13, 18]
+dict4 = {i: j for (i,j) in zip(x,y)}
+print ('4. ', dict4)
+
+# 5. from chars in a string
+x = "python"
+dict5 = {i: 3*i.upper() for i in x}
+print('5. ', dict5)
+
+# 6. list comprehension for the value
+x = [2, 4, 6, 8]
+y = [5, 10, 15, 20]
+dict6 = {i: [i + 2*j for j in y] for i in x}
+print('6. ', dict6)
+
+# 7. using items
+x = {'A':10, 'B':20, 'C':30}
+dict7 = {i: j*2 for (i,j) in x.items()}
+print('7. ', dict7)
+
+# 8. conditional comprehension
+dict8 = {i: i**3 for i in range(10) if i%2 == 0}
+print('8. ', dict8)
+
+# 9. if-else conditional comprehension
+x = {'A':10, 'B':20, 'C':30}
+dict9 = {i: (j if j < 15 else j+100) for (i,j) in x.items()}
+print('9. ', dict9)
+
+# 10. transformation from an existing dict
+x = {'A':10, 'B':20, 'C':30}
+dict10 = {i: x[i]+1 for i in x}
+print('10. ', dict10)
+
+
+
+
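One common pattern the file stops just short of is inverting a dictionary with a comprehension. A sketch, reusing the `items()` idiom from examples 7 and 9 (the variable names are my own; this assumes the values are unique and hashable):

```python
# Invert a dict: swap keys and values via a comprehension over items()
prices = {'A': 10, 'B': 20, 'C': 30}
by_price = {v: k for (k, v) in prices.items()}
print(by_price)  # {10: 'A', 20: 'B', 30: 'C'}
```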
diff --git a/exception-handling.py b/exception-handling.py
index 8dd489f1..57ddf118 100644
--- a/exception-handling.py
+++ b/exception-handling.py
@@ -1,3 +1,20 @@
+# more on try/except
+# basic syntax
+'''
+try:
+    code1
+
+except:
+    runs if code1 raises an error
+
+else:
+    runs only if the try block succeeded, i.e. no error in code1
+
+finally:
+    always runs, whether or not the try block failed
+
+'''
+
filename = 'exception_data.txt'
# Outer try block catches file name or file doesn't exist errors.
try:
@@ -28,4 +45,22 @@ def this_fails():
try:
this_fails()
except ZeroDivisionError as err:
- print('Handling run-time error:', err)
\ No newline at end of file
+ print('Handling run-time error:', err)
+
+
+def divide_me(n):
+ x = 1/n
+
+i = int(input('enter a number '))
+try:
+ divide_me(i)
+
+except Exception as e:
+    print(e)  # prints the error message without stopping the program
+
+else:
+    print('Your Code Ran Successfully')  # runs only if divide_me(i) raised no error
+
+finally:
+    print('thanks')  # always runs
+
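The four-clause flow described in the docstring above can be exercised deterministically (no `input()`), recording which clauses ran. A sketch with hypothetical helper names:

```python
def divide(n):
    return 1 / n  # raises ZeroDivisionError when n == 0

def run(n):
    events = []
    try:
        divide(n)
    except Exception as e:
        events.append(f'except: {e}')
    else:
        events.append('else')     # only when try raised nothing
    finally:
        events.append('finally')  # always
    return events

print(run(2))  # ['else', 'finally']
print(run(0))  # ['except: division by zero', 'finally']
```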
diff --git a/factorial.py b/factorial.py
index 2a70c3dc..8e4a65ea 100644
--- a/factorial.py
+++ b/factorial.py
@@ -14,6 +14,6 @@ def get_iterative_factorial(n):
for i in range(1, n+1):
fact *= i
return fact
-
+# input should be an integer
print(get_recursive_factorial(6))
-print(get_iterative_factorial(6))
\ No newline at end of file
+print(get_iterative_factorial(6))
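The recursive and iterative versions above should always agree; a quick cross-check sketch (function bodies restated here so the block is self-contained):

```python
def fact_rec(n):
    # recursive factorial; n should be a non-negative integer
    return 1 if n <= 1 else n * fact_rec(n - 1)

def fact_iter(n):
    # iterative factorial
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

assert all(fact_rec(n) == fact_iter(n) for n in range(10))
print(fact_rec(6))  # 720
```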
diff --git a/flatten_list.py b/flatten_list.py
new file mode 100644
index 00000000..3f3c57df
--- /dev/null
+++ b/flatten_list.py
@@ -0,0 +1,27 @@
+# Python Flatten Nested Lists
+# (c) Joe James 2023
+
+# list comprehension method
+def flatten1 (myList):
+ return [i for j in myList for i in j]
+
+# recursive method
+def flatten2 (myList):
+ flatList = []
+ for item in myList:
+ if isinstance(item, list):
+ flatList.extend(flatten2(item))
+ else:
+ flatList.append(item)
+ return flatList
+
+list1 = [[0], [1, 2], [3, [4, 5]], [6], [7]]
+list2 = [0, [1, 2], [3, [4, 5]], [6], 7]
+
+print("flatten1(list1): ", flatten1(list1)) # works, but only flattens 1 layer of sublists
+# print(flatten1(list2)) # error - can't work with list of ints and sublists of ints
+
+print("flatten2(list1): ", flatten2(list1))
+print("flatten2(list2): ", flatten2(list2))
+
+
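`flatten2` builds an intermediate list at every recursion level; a generator variant of the same recursive idea yields leaf items one at a time instead. A sketch (the generator name is my own):

```python
def flatten_gen(myList):
    # yield leaf items at any nesting depth, without building sublists
    for item in myList:
        if isinstance(item, list):
            yield from flatten_gen(item)
        else:
            yield item

list2 = [0, [1, 2], [3, [4, 5]], [6], 7]
print(list(flatten_gen(list2)))  # [0, 1, 2, 3, 4, 5, 6, 7]
```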
diff --git a/graph_adjacency-list.py b/graph_adjacency-list.py
index fec2f958..ebc3f47c 100644
--- a/graph_adjacency-list.py
+++ b/graph_adjacency-list.py
@@ -4,9 +4,9 @@ def __init__(self, n):
self.name = n
self.neighbors = list()
- def add_neighbor(self, v):
+ def add_neighbor(self, v, weight):
if v not in self.neighbors:
- self.neighbors.append(v)
+ self.neighbors.append((v, weight))
self.neighbors.sort()
class Graph:
@@ -19,11 +19,11 @@ def add_vertex(self, vertex):
else:
return False
- def add_edge(self, u, v):
+ def add_edge(self, u, v, weight=0):
if u in self.vertices and v in self.vertices:
# my YouTube video shows a silly for loop here, but this is a much faster way to do it
- self.vertices[u].add_neighbor(v)
- self.vertices[v].add_neighbor(u)
+ self.vertices[u].add_neighbor(v, weight)
+ self.vertices[v].add_neighbor(u, weight)
return True
else:
return False
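The hunks above thread a `weight` through `add_neighbor` and `add_edge`, so each neighbor list holds `(vertex, weight)` tuples. A self-contained condensation of the resulting structure (classes trimmed to the changed behavior, and the original's neighbor sorting omitted):

```python
class Graph:
    def __init__(self):
        self.vertices = {}  # name -> list of (neighbor, weight) tuples

    def add_vertex(self, name):
        self.vertices.setdefault(name, [])

    def add_edge(self, u, v, weight=0):
        # undirected: record the weighted edge in both neighbor lists
        if u in self.vertices and v in self.vertices:
            self.vertices[u].append((v, weight))
            self.vertices[v].append((u, weight))

g = Graph()
for name in 'ABC':
    g.add_vertex(name)
g.add_edge('A', 'B', 5)
g.add_edge('B', 'C', 2)
print(g.vertices['B'])  # [('A', 5), ('C', 2)]
```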
diff --git a/graph_adjacency-matrix.py b/graph_adjacency-matrix.py
index b6d05589..3f315001 100644
--- a/graph_adjacency-matrix.py
+++ b/graph_adjacency-matrix.py
@@ -1,4 +1,5 @@
# implementation of an undirected graph using Adjacency Matrix, with weighted or unweighted edges
+# it definitely works
class Vertex:
def __init__(self, n):
self.name = n
@@ -46,4 +47,4 @@ def print_graph(self):
for edge in edges:
g.add_edge(edge[:1], edge[1:])
-g.print_graph()
\ No newline at end of file
+g.print_graph()
diff --git a/lcm.py b/lcm.py
index 8d584ab7..a308141e 100644
--- a/lcm.py
+++ b/lcm.py
@@ -1,4 +1,4 @@
-# computes Lowest Common Multiple LCM / Least Common Denominator LCD
+# computes Lowest Common Multiple (LCM) / Least Common Denominator (LCD)
# useful for adding and subtracting fractions
# 2 numbers
@@ -21,4 +21,4 @@ def lcm3(nums):
print(str(lcm(7, 12)))
nums = [3, 2, 16]
-print(str(lcm3(nums)))
\ No newline at end of file
+print(str(lcm3(nums)))
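The LCM can also be computed through the GCD, avoiding trial multiplication entirely, since `lcm(a, b) * gcd(a, b) == a * b` for positive integers. A sketch using the standard-library `math.gcd`, checked against the same inputs the file uses:

```python
import math
from functools import reduce

def lcm_gcd(a, b):
    # lcm via the identity lcm(a, b) * gcd(a, b) == a * b
    return a * b // math.gcd(a, b)

def lcm_many(nums):
    # fold the pairwise lcm across a list of numbers
    return reduce(lcm_gcd, nums)

print(lcm_gcd(7, 12))        # 84
print(lcm_many([3, 2, 16]))  # 48
```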
diff --git a/match statements.ipynb b/match statements.ipynb
new file mode 100644
index 00000000..a8fc422d
--- /dev/null
+++ b/match statements.ipynb
@@ -0,0 +1,327 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Python 10 - Structural Pattern Matching\n",
+ "### match statements \n",
+ "Very similar to switch/case statements in C, Java, and Javascript. \n",
+ "Can be used in lieu of if/elif/else blocks. \n",
+ "[documentation](https://www.python.org/dev/peps/pep-0622/)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Can use integer for match variable..."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "large\n"
+ ]
+ }
+ ],
+ "source": [
+ "var = 3\n",
+ "\n",
+ "match var:\n",
+ " case 1:\n",
+ " print('small')\n",
+ " case 2:\n",
+ " print('medium')\n",
+ " case 3:\n",
+ " print('large')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### ...or floating point..."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "large\n"
+ ]
+ }
+ ],
+ "source": [
+ "var = 1.5\n",
+ "\n",
+ "match var:\n",
+ " case 1.3:\n",
+ " print('small')\n",
+ " case 1.4:\n",
+ " print('medium')\n",
+ " case 1.5:\n",
+ " print('large')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### ...or Tuple...\n",
+ "Note here we also use a variable to receive *any* value."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "on x-axis\n"
+ ]
+ }
+ ],
+ "source": [
+ "var = (8,0)\n",
+ "\n",
+ "match var:\n",
+ " case (0,x):\n",
+ " print('on y-axis')\n",
+ " case (x,0):\n",
+ " print('on x-axis')\n",
+ " case (x,y):\n",
+ " print('not on axis')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### ...or String"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "small\n"
+ ]
+ }
+ ],
+ "source": [
+ "var = \"S\"\n",
+ "\n",
+ "match var:\n",
+ " case \"S\":\n",
+ " print('small')\n",
+ " case \"Med\":\n",
+ " print('medium')\n",
+ " case \"Lg\":\n",
+ " print('large')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### The Default case _ \n",
+ "The default case, using underscore, is optional. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "large\n"
+ ]
+ }
+ ],
+ "source": [
+ "var = 4\n",
+ "\n",
+ "match var:\n",
+ " case 1:\n",
+ " print('small')\n",
+ " case 2:\n",
+ " print('medium')\n",
+ " case _:\n",
+ " print('large')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Conditionals in case \n",
+ "*or* conditions (using bar) are supported in case statements."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "small\n"
+ ]
+ }
+ ],
+ "source": [
+ "var = 2\n",
+ "\n",
+ "match var:\n",
+ " case 2 | 3:\n",
+ " print('small')\n",
+ " case 4 | 5 | 6:\n",
+ " print('medium')\n",
+ " case _:\n",
+ " print('large')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### No breaks needed\n",
+ "Guard conditions with *if* are supported, using the syntax: case var if (condition). \n",
+ "\n",
+ "Note that you do not need break statements. The match block will automatically end execution after one case is executed."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "A\n",
+ "F\n"
+ ]
+ }
+ ],
+ "source": [
+ "def print_grade(score):\n",
+ " match score:\n",
+ " # case score > 90 this does not work!\n",
+ " case score if score >= 90:\n",
+ " print('A')\n",
+ " case score if score >= 80:\n",
+ " print('B')\n",
+ " case score if score >= 70:\n",
+ " print('C')\n",
+ " case score if score >= 60:\n",
+ " print('D')\n",
+ " case _:\n",
+ " print('F')\n",
+ " \n",
+ "print_grade(94)\n",
+ "print_grade(48)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Python Objects \n",
+ "Match statements can also use Python objects and instance variables. \n",
+ "In the final case here we could have used _ default case, but instead used x so that we could use the value of x in our print statement."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "medium\n",
+ "Size XL is not recognized.\n"
+ ]
+ }
+ ],
+ "source": [
+ "class T_shirt:\n",
+ " def __init__(self, s):\n",
+ " self.size = s\n",
+ "\n",
+ " def order(self):\n",
+ " match self.size:\n",
+ " case 'S' | 'Sm':\n",
+ " print('small')\n",
+ " case 'M' | 'Med':\n",
+ " print('medium')\n",
+ " case 'L' | 'Lg':\n",
+ " print('large')\n",
+ " case x:\n",
+ " print(f'Size {x} is not recognized.')\n",
+ " \n",
+ "shirt1 = T_shirt('Med')\n",
+ "shirt1.order()\n",
+ "\n",
+ "shirt2 = T_shirt('XL')\n",
+ "shirt2.order()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/python oriented programming b/python oriented programming
new file mode 100644
index 00000000..8e3a4499
--- /dev/null
+++ b/python oriented programming
@@ -0,0 +1,34 @@
+class Mobile:
+ def make_call(self):
+ print("i am making a call")
+ def play_game(self):
+ print("i am playing games")
+
+m1=Mobile()
+
+m1.make_call()
+
+m1.play_game()
+
+class Mobile:
+ def set_color(self,color):
+ self.color=color
+ def set_cost(self,cost):
+ self.cost=cost
+ def show_color(self):
+ print("black")
+ def show_price(self):
+ print("5000")
+ def make_call(self):
+ print("i am making a call")
+ def play_game(self):
+ print("i am playing games")
+
+
+
+m2=Mobile()
+
+m2.show_price()
+
+m2.show_color()
+
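The second `Mobile` class above sets its attributes through `set_color`/`set_cost` after construction; the more idiomatic Python shape is to take them in `__init__`. A sketch of the same class with a constructor (attribute names and strings kept from the file):

```python
class Mobile:
    def __init__(self, color, cost):
        # store state at construction time instead of via set_ methods
        self.color = color
        self.cost = cost

    def make_call(self):
        print("i am making a call")

m = Mobile("black", 5000)
print(m.color, m.cost)  # black 5000
```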
diff --git a/remove_from_list.py b/remove_from_list.py
new file mode 100644
index 00000000..9619664f
--- /dev/null
+++ b/remove_from_list.py
@@ -0,0 +1,48 @@
+# Python: del vs pop vs remove from a list
+# (c) Joe James 2023
+
+def get_dogs():
+ return ['Fido', 'Rover', 'Spot', 'Duke', 'Chip', 'Spot']
+
+dogs = get_dogs()
+print(dogs)
+
+# Use pop() to remove last item or an item by index and get the returned value.
+print('1. pop last item from list:')
+myDog = dogs.pop()
+print(myDog, dogs)
+
+dogs = get_dogs()
+print('2. pop item with index 1:')
+myDog = dogs.pop(1)
+print(myDog, dogs)
+
+# Use remove() to delete an item by value. (raises ValueError if value not found)
+dogs = get_dogs()
+print('3. remove first Spot from list:')
+dogs.remove('Spot')
+print(dogs)
+
+# Use del to remove an item or range of items by index. Or delete entire list.
+dogs = get_dogs()
+print('4. del item with index 3:')
+del(dogs[3])
+print(dogs)
+
+dogs = get_dogs()
+print('5. del items [1:3] from list:')
+del(dogs[1:3])
+print(dogs)
+
+dogs = get_dogs()
+print('6. del entire list:')
+del(dogs)
+# print(dogs)  # NameError: the name 'dogs' no longer exists after del
+
+
+
+
+
+
+
+
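Since `remove()` raises `ValueError` when the value is absent (as noted in the file above), it is often worth wrapping. A sketch of a safe-remove helper (the helper name is my own):

```python
def safe_remove(items, value):
    # remove the first occurrence of value, if present; report whether it was found
    try:
        items.remove(value)
        return True
    except ValueError:
        return False

dogs = ['Fido', 'Rover', 'Spot']
print(safe_remove(dogs, 'Spot'))  # True
print(safe_remove(dogs, 'Duke'))  # False
print(dogs)                       # ['Fido', 'Rover']
```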