\n"
+ ]
+ }
+ ],
+ "source": [
+ "def func(**losers):\n",
+ " print(losers)\n",
+ " print(losers['a'])\n",
+ " print(type(losers))\n",
+ " \n",
+ "func(a='Edsel', b='Betamax', c='mGaetz')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+    "This works, but it has a limitation: the keys must be strings that are valid keyword names, so some perfectly normal Python dictionaries will raise an error. For example, {1:'Edsel', 2:'Betamax'} fails (see the illustrative cell just below)."
+ ]
+ },
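+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "A quick illustrative note (added for clarity; not executed in the original notebook): keyword names must be strings, so unpacking a dict with non-string keys raises a TypeError."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Hypothetical failing call, left commented out so the notebook still runs top to bottom.\n",
+    "# func(**{1: 'Edsel', 2: 'Betamax'})   # TypeError: keywords must be strings"
+   ]
+  },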
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Edsel\n"
+ ]
+ }
+ ],
+ "source": [
+ "def func(a, b, c):\n",
+ " print(a)\n",
+ "\n",
+ "losers = {'a':'Edsel', 'b':'Betamax', 'c':'mGaetz'}\n",
+ "func(**losers)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/Web Data Mining/Python BeautifulSoup Web Scraping Tutorial.ipynb b/Web Data Mining/Python BeautifulSoup Web Scraping Tutorial.ipynb
new file mode 100644
index 00000000..f7d55aa9
--- /dev/null
+++ b/Web Data Mining/Python BeautifulSoup Web Scraping Tutorial.ipynb
@@ -0,0 +1,514 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Python BeautifulSoup Web Scraping Tutorial\n",
+ "Learn to scrape data from the web using the Python BeautifulSoup bs4 library. \n",
+ "BeautifulSoup makes it easy to parse useful data out of an HTML page. \n",
+ "First install the bs4 library on your system by running at the command line, \n",
+    "*pip install beautifulsoup4* (the package is imported as *bs4*) \n",
+ "See [BeautifulSoup official documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) for the complete set of functions.\n",
+ "\n",
+ "### Import requests so we can fetch the html content of the webpage\n",
+ "You can see our example page has about 28k characters."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 61,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "28556\n"
+ ]
+ }
+ ],
+ "source": [
+ "import requests\n",
+ "r = requests.get('https://www.usclimatedata.com/climate/united-states/us')\n",
+ "print(len(r.text))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Import BeautifulSoup, and convert your HTML into a bs4 object\n",
+    "Now we can access specific HTML tags on the page using dot notation, much like navigating a JSON object."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Climate United States - normals and averages\n",
+ "Climate United States - normals and averages\n"
+ ]
+ }
+ ],
+ "source": [
+ "from bs4 import BeautifulSoup\n",
+    "soup = BeautifulSoup(r.text, 'html.parser')\n",
+ "print(soup.title)\n",
+ "print(soup.title.string)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Drill into the bs4 object to access page contents\n",
+    "soup.p gives you the first paragraph tag on the page. \n",
+    "soup.a gives you the first anchor (link) on the page. \n",
+    "Get the contents of an attribute inside an HTML tag using square brackets, e.g. soup.a['title']. \n",
+    "Use .parent to get the parent object, and .next_sibling to get the next peer object (a short .next_sibling sketch follows the next cell). \n",
+ "**Use your browser's *inspect element* feature to find the tag for the data you want.**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+      "You are here: United States\n",
+ "You are here: United States\n",
+ "\n",
+      "\n",
+ "\n",
+ "US Climate Data on Facebook\n",
+ "\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(soup.p)\n",
+ "print(soup.p.text)\n",
+ "print(soup.a)\n",
+ "print(soup.a['title'])\n",
+ "print()\n",
+ "print(soup.p.parent)"
+ ]
+ },
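+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Illustrative sketch (not run in the original notebook): .next_sibling, mentioned above,\n",
+    "# returns the next node at the same level; the exact output depends on the live page.\n",
+    "print(soup.p.next_sibling)"
+   ]
+  },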
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Prettify() is handy for formatted printing \n",
+    "but note this works only on bs4 objects, not on plain strings, dicts, or lists. For those you need the pprint module (a small pprint sketch follows the next cell)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ " - \n",
+ " \n",
+ " Monthly\n",
+ " \n",
+      "\n",
+ " \n",
+ " \n",
+ " You are here:\n",
+ " \n",
+ " United States\n",
+ " \n",
+      "\n",
+ " \n",
+      "\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(soup.p.parent.prettify())"
+ ]
+ },
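+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Hedged sketch: pprint handles plain Python dicts and lists, as noted above.\n",
+    "# The sample dict here is made up purely for illustration.\n",
+    "from pprint import pprint\n",
+    "pprint({'Ohio': ['36', '40', '52'], 'Iowa': ['31', '36', '49']})"
+   ]
+  },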
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### We need all the state links on this page\n",
+ "First we find_all anchor tags, and print out the href attribute, which is the actual link url. \n",
+ "But we see the result includes some links we don't want, so we need to filter those out."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "https://www.facebook.com/yourweatherservice/n",
+ "https://twitter.com/usclimatedata/n",
+ "http://www.usclimatedata.com/n",
+ "/climate/united-states/us\n",
+ "#summary\n",
+ "/climate/united-states/us\n",
+ "#\n",
+ "#\n",
+ "/climate/alabama/united-states/3170\n",
+ "/climate/kentucky/united-states/3187\n",
+ "/climate/north-dakota/united-states/3204\n",
+ "/climate/alaska/united-states/3171\n",
+ "/climate/louisiana/united-states/3188\n",
+ "/climate/ohio/united-states/3205\n",
+ "/climate/arizona/united-states/3172\n",
+ "/climate/maine/united-states/3189\n",
+ "/climate/oklahoma/united-states/3206\n",
+ "/climate/arkansas/united-states/3173\n",
+ "/climate/maryland/united-states/1872\n",
+ "/climate/oregon/united-states/3207\n",
+ "/climate/california/united-states/3174\n",
+ "/climate/massachusetts/united-states/3191\n",
+ "/climate/pennsylvania/united-states/3208\n",
+ "/climate/colorado/united-states/3175\n",
+ "/climate/michigan/united-states/3192\n",
+ "/climate/rhode-island/united-states/3209\n",
+ "/climate/connecticut/united-states/3176\n",
+ "/climate/minnesota/united-states/3193\n",
+ "/climate/south-carolina/united-states/3210\n",
+ "/climate/delaware/united-states/3177\n",
+ "/climate/mississippi/united-states/3194\n",
+ "/climate/south-dakota/united-states/3211\n",
+ "/climate/district-of-columbia/united-states/3178\n",
+ "/climate/missouri/united-states/3195\n",
+ "/climate/tennessee/united-states/3212\n",
+ "/climate/florida/united-states/3179\n",
+ "/climate/montana/united-states/919\n",
+ "/climate/texas/united-states/3213\n",
+ "/climate/georgia/united-states/3180\n",
+ "/climate/nebraska/united-states/3197\n",
+ "/climate/utah/united-states/3214\n",
+ "/climate/hawaii/united-states/3181\n",
+ "/climate/nevada/united-states/3198\n",
+ "/climate/vermont/united-states/3215\n",
+ "/climate/idaho/united-states/3182\n",
+ "/climate/new-hampshire/united-states/3199\n",
+ "/climate/virginia/united-states/3216\n",
+ "/climate/illinois/united-states/3183\n",
+ "/climate/new-jersey/united-states/3200\n",
+ "/climate/washington/united-states/3217\n",
+ "/climate/indiana/united-states/3184\n",
+ "/climate/new-mexico/united-states/3201\n",
+ "/climate/west-virginia/united-states/3218\n",
+ "/climate/iowa/united-states/3185\n",
+ "/climate/new-york/united-states/3202\n",
+ "/climate/wisconsin/united-states/3219\n",
+ "/climate/kansas/united-states/3186\n",
+ "/climate/north-carolina/united-states/3203\n",
+ "/climate/wyoming/united-states/3220\n",
+ "https://www.yourweatherservice.com/n",
+ "https://www.climatedata.eu/n",
+ "https://www.weernetwerk.nl/n",
+ "/about-us.php\n",
+ "/disclaimer.php\n",
+ "/cookies.php\n"
+ ]
+ }
+ ],
+ "source": [
+ "for link in soup.find_all('a'):\n",
+ " print(link.get('href'))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Filter urls using string functions\n",
+ "We just add an *if* to check conditions, then add the good ones to a list. \n",
+ "In the end we get 51 state links, including Washington DC."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "51\n"
+ ]
+ }
+ ],
+ "source": [
+ "base_url = 'https://www.usclimatedata.com/'\n",
+ "state_links = []\n",
+ "for link in soup.find_all('a'):\n",
+ " url = link.get('href')\n",
+ " if url and '/climate/' in url and '/climate/united-states/us' not in url:\n",
+ " state_links.append(url)\n",
+ "print(len(state_links))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Test getting the data for one state\n",
+ "then print the title for that page."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Climate Ohio - temperature, rainfall and average\n"
+ ]
+ }
+ ],
+ "source": [
+ "r = requests.get(base_url + state_links[5])\n",
+    "soup = BeautifulSoup(r.text, 'html.parser')\n",
+ "print(soup.title.string)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### The data we need is in *tr* tags\n",
+ "But look, there are 58 tr tags on the page, and we only want 2 of them - the *Average high* rows."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 37,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "58\n"
+ ]
+ }
+ ],
+ "source": [
+ "rows = soup.find_all('tr')\n",
+ "print(len(rows))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Filter rows, and add temp data to a list\n",
+ "We use a list comprehension to filter the rows. \n",
+ "Then we have only 2 rows left. \n",
+    "We iterate through those 2 rows and append the monthly temps from the data cells (td) to a list, skipping each row's first cell, which holds the row label."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 50,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "2\n",
+ "['36', '40', '52', '63', '73', '82', '85', '84', '77', '65', '52', '41']\n"
+ ]
+ }
+ ],
+ "source": [
+ "rows = [row for row in rows if 'Average high' in str(row)]\n",
+ "print(len(rows))\n",
+ "\n",
+ "high_temps = []\n",
+ "for row in rows:\n",
+ " tds = row.find_all('td')\n",
+ " for i in range(1,7):\n",
+ " high_temps.append(tds[i].text)\n",
+ "print(high_temps)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Get the name of the State\n",
+    "On the first attempt we just split the title string into a list and grab the second word. \n",
+    "But that doesn't work for two-word states like New York and North Carolina. \n",
+    "So instead we slice the string from the first blank to the hyphen (see the extra check in the cell after the next one). "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 56,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Wyoming\n",
+ "Wyoming\n"
+ ]
+ }
+ ],
+ "source": [
+ "state = soup.title.string.split()[1]\n",
+ "print(state)\n",
+ "s = soup.title.string\n",
+ "state = s[s.find(' '):s.find('-')].strip()\n",
+ "print(state)"
+ ]
+ },
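+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Quick check on a made-up title string (illustrative only): the slice approach\n",
+    "# also handles two-word states, unlike the simple split()[1] approach.\n",
+    "s = 'Climate North Carolina - temperature, rainfall and average'\n",
+    "print(s.split()[1])                        # North (wrong)\n",
+    "print(s[s.find(' '):s.find('-')].strip())  # North Carolina"
+   ]
+  },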
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Add state name and temp list to the data dictionary\n",
+ "For a single state, this is what our scraped data looks like. \n",
+ "In this example we only got monthly highs by state, but you could drill into cities, and could get lows and precipitation. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 51,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{'Ohio': ['36', '40', '52', '63', '73', '82', '85', '84', '77', '65', '52', '41']}\n"
+ ]
+ }
+ ],
+ "source": [
+ "data = {}\n",
+ "data[state] = high_temps\n",
+ "print(data)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Put it all together and iterate 51 states\n",
+    "We loop through our 51-state list, get the high-temp data for each state, and add it to the data dict. \n",
+ "This combines all our work above into a single for loop. \n",
+ "The result is a dict with 51 states and a list of monthly highs for each."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 59,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{'Alabama': ['57', '62', '70', '77', '84', '90', '92', '92', '87', '78', '69', '60'], 'Kentucky': ['40', '45', '55', '66', '75', '83', '87', '86', '79', '68', '55', '44'], 'North Dakota': ['23', '28', '40', '57', '68', '77', '85', '83', '72', '58', '40', '26'], 'Alaska': ['23', '27', '34', '44', '56', '63', '65', '64', '55', '40', '28', '25'], 'Louisiana': ['62', '65', '72', '78', '85', '89', '91', '91', '87', '80', '72', '64'], 'Ohio': ['36', '40', '52', '63', '73', '82', '85', '84', '77', '65', '52', '41'], 'Arizona': ['67', '71', '77', '85', '95', '104', '106', '104', '100', '89', '76', '66'], 'Maine': ['28', '32', '40', '53', '65', '74', '79', '78', '70', '57', '45', '33'], 'Oklahoma': ['50', '55', '63', '72', '80', '88', '94', '93', '85', '73', '62', '51'], 'Arkansas': ['51', '55', '64', '73', '81', '89', '92', '93', '86', '75', '63', '52'], 'Maryland': ['42', '46', '54', '65', '75', '85', '89', '87', '80', '68', '58', '46'], 'Oregon': ['48', '52', '56', '61', '68', '74', '82', '82', '77', '64', '53', '46'], 'California': ['54', '60', '65', '71', '80', '87', '92', '91', '87', '78', '64', '54'], 'Massachusetts': ['36', '39', '45', '56', '66', '76', '81', '80', '72', '61', '51', '41'], 'Pennsylvania': ['40', '44', '53', '64', '74', '83', '87', '85', '78', '67', '56', '45'], 'Colorado': ['45', '46', '54', '61', '72', '82', '90', '88', '79', '66', '52', '45'], 'Michigan': ['30', '33', '44', '58', '69', '78', '82', '80', '73', '60', '47', '34'], 'Rhode Island': ['37', '40', '48', '59', '68', '78', '83', '81', '74', '63', '53', '42'], 'Connecticut': ['37', '40', '47', '58', '68', '77', '82', '81', '74', '63', '53', '42'], 'Minnesota': ['26', '31', '43', '58', '71', '80', '85', '82', '73', '59', '42', '29'], 'South Carolina': ['56', '60', '68', '76', '84', '90', '93', '91', '85', '76', '67', '58'], 'Delaware': ['43', '47', '55', '66', '75', '83', '87', '85', '79', '69', '58', '47'], 'Mississippi': ['56', '60', '69', '76', '83', '89', '92', '92', '87', '77', '67', '58'], 'South Dakota': ['22', '27', '39', '57', '69', '78', '84', '82', '72', '58', '39', '25'], 'District of Columbia': ['42', '44', '53', '64', '75', '83', '87', '84', '78', '67', '55', '45'], 'Missouri': ['40', '45', '56', '67', '75', '83', '88', '88', '80', '69', '56', '43'], 'Tennessee': ['50', '55', '64', '73', '81', '89', '92', '91', '85', '74', '63', '52'], 'Florida': ['64', '67', '74', '80', '87', '91', '92', '92', '88', '81', '73', '65'], 'Montana': ['33', '39', '48', '58', '67', '76', '86', '85', '73', '59', '43', '32'], 'Texas': ['62', '65', '72', '80', '87', '92', '96', '97', '91', '82', '71', '63'], 'Georgia': ['52', '57', '64', '72', '81', '86', '90', '88', '82', '73', '64', '54'], 'Nebraska': ['32', '37', '50', '63', '73', '84', '88', '86', '77', '64', '48', '36'], 'Utah': ['38', '44', '53', '61', '71', '82', '90', '89', '78', '65', '50', '40'], 'Hawaii': ['80', '80', '81', '83', '85', '87', '88', '89', '89', '87', '84', '81'], 'Nevada': ['45', '50', '57', '63', '71', '81', '90', '88', '80', '68', '54', '45'], 'Vermont': ['27', '31', '40', '55', '67', '76', '81', '79', '70', '57', '46', '33'], 'Idaho': ['38', '45', '55', '62', '72', '81', '91', '90', '79', '65', '48', '38'], 'New Hampshire': ['31', '35', '44', '57', '69', '77', '82', '81', '73', '60', '48', '36'], 'Virginia': ['47', '51', '60', '70', '78', '86', '90', '88', '81', '71', '61', '51'], 'Illinois': ['32', '36', '46', '59', '70', '81', '84', '82', '75', '63', '48', '36'], 'New Jersey': ['39', '42', '51', '62', '72', '82', '86', '84', '77', '65', '55', 
'44'], 'Washington': ['47', '50', '54', '58', '65', '70', '76', '76', '71', '60', '51', '46'], 'Indiana': ['35', '40', '51', '63', '73', '82', '85', '83', '77', '65', '52', '39'], 'New Mexico': ['44', '48', '56', '65', '74', '83', '86', '83', '78', '67', '53', '43'], 'West Virginia': ['42', '47', '56', '68', '75', '82', '85', '84', '78', '68', '57', '46'], 'Iowa': ['31', '36', '49', '62', '72', '82', '86', '84', '76', '63', '48', '34'], 'New York': ['39', '42', '50', '60', '71', '79', '85', '83', '76', '65', '54', '44'], 'Wisconsin': ['29', '33', '42', '54', '65', '75', '80', '78', '71', '59', '46', '33'], 'Kansas': ['40', '45', '56', '67', '76', '85', '89', '89', '80', '68', '55', '42'], 'North Carolina': ['51', '55', '63', '72', '79', '86', '89', '87', '81', '72', '62', '53'], 'Wyoming': ['40', '40', '47', '55', '65', '75', '83', '81', '72', '59', '47', '38']}\n"
+ ]
+ }
+ ],
+ "source": [
+ "data = {}\n",
+ "for state_link in state_links:\n",
+    "    url = base_url + state_link\n",
+    "    r = requests.get(url)\n",
+    "    soup = BeautifulSoup(r.text, 'html.parser')\n",
+ " rows = soup.find_all('tr')\n",
+ " rows = [row for row in rows if 'Average high' in str(row)]\n",
+ " high_temps = []\n",
+ " for row in rows:\n",
+ " tds = row.find_all('td')\n",
+ " for i in range(1,7):\n",
+ " high_temps.append(tds[i].text)\n",
+ " s = soup.title.string\n",
+ " state = s[s.find(' '):s.find('-')].strip()\n",
+ " data[state] = high_temps\n",
+ "print(data)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Save to CSV file\n",
+ "Lastly, we might want to write all this data to a CSV file. \n",
+ "Here's a quick easy way to do that."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 60,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import csv\n",
+ "\n",
+    "with open('high_temps.csv', 'w', newline='') as f:\n",
+ " w = csv.writer(f)\n",
+ " w.writerows(data.items())"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/Web Data Mining/Python Requests.ipynb b/Web Data Mining/Python Requests.ipynb
index 6b9274ec..66f27fed 100644
--- a/Web Data Mining/Python Requests.ipynb
+++ b/Web Data Mining/Python Requests.ipynb
@@ -19,7 +19,8 @@
"metadata": {},
"outputs": [],
"source": [
- "import requests"
+ "import requests\n",
+ "import json"
]
},
{
diff --git a/addition of two number b/addition of two number
new file mode 100644
index 00000000..d31335e3
--- /dev/null
+++ b/addition of two number
@@ -0,0 +1,9 @@
+# Store input numbers
+num1 = input('Enter first number: ')
+num2 = input('Enter second number: ')
+
+# Add two numbers
+total = float(num1) + float(num2)
+
+# Display the sum
+print('The sum of {0} and {1} is {2}'.format(num1, num2, total))
diff --git a/deep_copy.ipynb b/deep_copy.ipynb
new file mode 100644
index 00000000..a11d7052
--- /dev/null
+++ b/deep_copy.ipynb
@@ -0,0 +1,248 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Python: how to Copy and Deep Copy Python Lists \n",
+ "(c) Joe James 2023"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Assignment is not a Copy\n",
+    "Assignment (listB = listA) does not create a copy: changes made through one name are reflected in the other,\n",
+    "because listA and listB both reference the exact same list."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[2, 44, 6, [1, 3]]\n",
+ "140554034568968\n",
+ "140554034568968\n"
+ ]
+ }
+ ],
+ "source": [
+ "listA = [2, 4, 6, [1, 3]]\n",
+ "listB = listA\n",
+ "listB[1] = 44\n",
+ "print(listA)\n",
+ "print(id(listA))\n",
+ "print(id(listB))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Shallow copy using the list() constructor\n",
+    "A shallow copy only fully duplicates a flat (1D) list of immutable values. \n",
+    "Sublists, dicts, and other nested objects are not copied; the new list keeps references to the same objects."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[2, 4, 6, [55, 3]]\n"
+ ]
+ }
+ ],
+ "source": [
+ "listA = [2, 4, 6, [1, 3]]\n",
+ "listB = list(listA)\n",
+ "listB[1] = 44\n",
+ "listB[3][0] = 55\n",
+ "print(listA)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Other ways to make a Shallow copy\n",
+    "A list comprehension, list.copy(), or copy.copy() can also be used to make a *shallow* copy."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[2, 4, 6, [55, 3]]\n"
+ ]
+ }
+ ],
+ "source": [
+ "listA = [2, 4, 6, [1, 3]]\n",
+ "listB = [x for x in listA]\n",
+ "listB[1] = 44\n",
+ "listB[3][0] = 55\n",
+ "print(listA)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[2, 4, 6, [55, 3]]\n"
+ ]
+ }
+ ],
+ "source": [
+ "listA = [2, 4, 6, [1, 3]]\n",
+ "listB = listA.copy()\n",
+ "listB[1] = 44\n",
+ "listB[3][0] = 55\n",
+ "print(listA)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[2, 4, 6, [55, 3]]\n"
+ ]
+ }
+ ],
+ "source": [
+ "import copy\n",
+ "listA = [2, 4, 6, [1, 3]]\n",
+ "listB = copy.copy(listA)\n",
+ "listB[1] = 44\n",
+ "listB[3][0] = 55\n",
+ "print(listA)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### How to Deep Copy a Python List\n",
+ "use copy.deepcopy()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[2, 4, 6, [1, 3]]\n"
+ ]
+ }
+ ],
+ "source": [
+ "import copy\n",
+ "listA = [2, 4, 6, [1, 3]]\n",
+ "listB = copy.deepcopy(listA)\n",
+ "listB[1] = 44\n",
+ "listB[3][0] = 55\n",
+ "print(listA)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Deepcopy with Objects"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "140554035637216 140554035637104\n",
+ "140554035637216 140554035637216\n",
+ "140554035637216 140554035637048\n"
+ ]
+ }
+ ],
+ "source": [
+ "class Pony():\n",
+ " def __init__(self, n):\n",
+ " self.name = n\n",
+ " \n",
+ "# copy.copy on an object gives you 2 unique objects (with same attributes)\n",
+ "pony1 = Pony('Pinto')\n",
+ "pony2 = copy.copy(pony1)\n",
+ "print(id(pony1), id(pony2))\n",
+ "\n",
+ "# copy.copy on a list of objects gives you 2 unique lists containing the exact same objects \n",
+    "# (i.e. the new list is a shallow copy)\n",
+    "m = [pony1, pony2]\n",
+    "n = copy.copy(m)\n",
+    "print(id(m[0]), id(n[0]))\n",
+    "\n",
+    "# use copy.deepcopy to deep copy a list of objects\n",
+    "n = copy.deepcopy(m)\n",
+ "print(id(m[0]), id(n[0]))"
+ ]
+ },
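+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Follow-up sketch (added for illustration): what the matching/different ids above mean\n",
+    "# in practice when you mutate an element through each kind of copy.\n",
+    "shallow = copy.copy(m)\n",
+    "deep = copy.deepcopy(m)\n",
+    "shallow[0].name = 'Paint'\n",
+    "print(m[0].name)   # Paint - the shallow copy shares the same Pony object\n",
+    "deep[0].name = 'Dusty'\n",
+    "print(m[0].name)   # still Paint - the deep copy made its own Pony object"
+   ]
+  },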
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/dict_comprehensions.py b/dict_comprehensions.py
new file mode 100644
index 00000000..890c7221
--- /dev/null
+++ b/dict_comprehensions.py
@@ -0,0 +1,54 @@
+# Python Dictionary Comprehensions
+# (c) Joe James 2023
+
+# 1. math function to compute values using list
+dict1 = {x: 2*x for x in [0, 2, 4, 6]}
+print ('1. ', dict1)
+
+# 2. math function to compute values using range
+dict2 = {x: x**2 for x in range(0, 7, 2)}
+print ('2. ', dict2)
+
+# 3. from chars in a string
+dict3 = {x: ord(x) for x in 'Kumar'}
+print ('3. ', dict3)
+
+# 4. given lists of keys & values
+x = ['Aditii', 'Brandon', 'Clumley', 'Magomed', 'Rishi']
+y = [1, 2, 3, 13, 18]
+dict4 = {i: j for (i,j) in zip(x,y)}
+print ('4. ', dict4)
+
+# 5. from chars in a string
+x = "python"
+dict5 = {i: 3*i.upper() for i in x}
+print('5. ', dict5)
+
+# 6. list comprehension for the value
+x = [2, 4, 6, 8]
+y = [5, 10, 15, 20]
+dict6 = {i: [i + 2*j for j in y] for i in x}
+print('6. ', dict6)
+
+#7. using items
+x = {'A':10, 'B':20, 'C':30}
+dict7 = {i: j*2 for (i,j) in x.items()}
+print('7. ', dict7)
+
+# 8. conditional comprehension
+dict8 = {i: i**3 for i in range(10) if i%2 == 0}
+print('8. ', dict8)
+
+# 9. if-else conditional comprehension
+x = {'A':10, 'B':20, 'C':30}
+dict9 = {i: (j if j < 15 else j+100) for (i,j) in x.items()}
+print('9. ', dict9)
+
+# 10. transformation from an existing dict
+x = {'A':10, 'B':20, 'C':30}
+dict10 = {i: x[i]+1 for i in x}
+print('10. ', dict10)
diff --git a/exception-handling.py b/exception-handling.py
index 8dd489f1..57ddf118 100644
--- a/exception-handling.py
+++ b/exception-handling.py
@@ -1,3 +1,20 @@
+# More about try/except
+# Basic syntax:
+'''
+try:
+    code1
+
+except:
+    code that runs if code1 fails (raises an error)
+
+else:
+    code that runs only if the try block succeeded, i.e. code1 raised no error
+
+finally:
+    code that runs in every case, whether the try block failed or not
+'''
+
filename = 'exception_data.txt'
# Outer try block catches file name or file doesn't exist errors.
try:
@@ -28,4 +45,22 @@ def this_fails():
try:
this_fails()
except ZeroDivisionError as err:
- print('Handling run-time error:', err)
\ No newline at end of file
+ print('Handling run-time error:', err)
+
+
+def divide_me(n):
+ x = 1/n
+
+i = int(input('enter a number '))
+try:
+ divide_me(i)
+
+except Exception as e:
+    print(e)  # prints the error message but does not stop the program
+
+else:
+    print('Your code ran successfully')  # runs only if divide_me(i) finished without an error
+
+finally:
+    print('thanks')  # runs in every case
+
diff --git a/factorial.py b/factorial.py
index 2a70c3dc..8e4a65ea 100644
--- a/factorial.py
+++ b/factorial.py
@@ -14,6 +14,6 @@ def get_iterative_factorial(n):
for i in range(1, n+1):
fact *= i
return fact
-
+print("Input should be a non-negative integer")
print(get_recursive_factorial(6))
-print(get_iterative_factorial(6))
\ No newline at end of file
+print(get_iterative_factorial(6))
diff --git a/flatten_list.py b/flatten_list.py
new file mode 100644
index 00000000..3f3c57df
--- /dev/null
+++ b/flatten_list.py
@@ -0,0 +1,27 @@
+# Python Flatten Nested Lists
+# (c) Joe James 2023
+
+# list comprehension method
+def flatten1 (myList):
+ return [i for j in myList for i in j]
+
+# recursive method
+def flatten2 (myList):
+ flatList = []
+ for item in myList:
+ if isinstance(item, list):
+ flatList.extend(flatten2(item))
+ else:
+ flatList.append(item)
+ return flatList
+
+list1 = [[0], [1, 2], [3, [4, 5]], [6], [7]]
+list2 = [0, [1, 2], [3, [4, 5]], [6], 7]
+
+print("flatten1(list1): ", flatten1(list1)) # works, but only flattens 1 layer of sublists
+# print(flatten1(list2)) # error - can't work with list of ints and sublists of ints
+
+print("flatten2(list1): ", flatten2(list1))
+print("flatten2(list2): ", flatten2(list2))
+
+
diff --git a/graph_adjacency-list.py b/graph_adjacency-list.py
index fec2f958..ebc3f47c 100644
--- a/graph_adjacency-list.py
+++ b/graph_adjacency-list.py
@@ -4,9 +4,9 @@ def __init__(self, n):
self.name = n
self.neighbors = list()
- def add_neighbor(self, v):
+ def add_neighbor(self, v, weight):
if v not in self.neighbors:
- self.neighbors.append(v)
+ self.neighbors.append((v, weight))
self.neighbors.sort()
class Graph:
@@ -19,11 +19,11 @@ def add_vertex(self, vertex):
else:
return False
- def add_edge(self, u, v):
+ def add_edge(self, u, v, weight=0):
if u in self.vertices and v in self.vertices:
# my YouTube video shows a silly for loop here, but this is a much faster way to do it
- self.vertices[u].add_neighbor(v)
- self.vertices[v].add_neighbor(u)
+ self.vertices[u].add_neighbor(v, weight)
+ self.vertices[v].add_neighbor(u, weight)
return True
else:
return False
diff --git a/graph_adjacency-matrix.py b/graph_adjacency-matrix.py
index b6d05589..3f315001 100644
--- a/graph_adjacency-matrix.py
+++ b/graph_adjacency-matrix.py
@@ -1,4 +1,5 @@
# implementation of an undirected graph using Adjacency Matrix, with weighted or unweighted edges
+# it definitely works
class Vertex:
def __init__(self, n):
self.name = n
@@ -46,4 +47,4 @@ def print_graph(self):
for edge in edges:
g.add_edge(edge[:1], edge[1:])
-g.print_graph()
\ No newline at end of file
+g.print_graph()
diff --git a/lcm.py b/lcm.py
index 8d584ab7..a308141e 100644
--- a/lcm.py
+++ b/lcm.py
@@ -1,4 +1,4 @@
-# computes Lowest Common Multiple LCM / Least Common Denominator LCD
+# computes Lowest Common Multiple (LCM) / Least Common Denominator (LCD)
# useful for adding and subtracting fractions
# 2 numbers
@@ -21,4 +21,4 @@ def lcm3(nums):
print(str(lcm(7, 12)))
nums = [3, 2, 16]
-print(str(lcm3(nums)))
\ No newline at end of file
+print(str(lcm3(nums)))
diff --git a/match statements.ipynb b/match statements.ipynb
new file mode 100644
index 00000000..a8fc422d
--- /dev/null
+++ b/match statements.ipynb
@@ -0,0 +1,327 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Python 10 - Structural Pattern Matching\n",
+ "### match statements \n",
+    "Very similar to switch/case statements in C, Java, and JavaScript. \n",
+ "Can be used in lieu of if/elif/else blocks. \n",
+ "[documentation](https://www.python.org/dev/peps/pep-0622/)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Can use integer for match variable..."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "large\n"
+ ]
+ }
+ ],
+ "source": [
+ "var = 3\n",
+ "\n",
+ "match var:\n",
+ " case 1:\n",
+ " print('small')\n",
+ " case 2:\n",
+ " print('medium')\n",
+ " case 3:\n",
+ " print('large')"
+ ]
+ },
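+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# For comparison (illustrative, not part of the original notebook):\n",
+    "# the same logic written as a plain if/elif/else block.\n",
+    "var = 3\n",
+    "if var == 1:\n",
+    "    print('small')\n",
+    "elif var == 2:\n",
+    "    print('medium')\n",
+    "elif var == 3:\n",
+    "    print('large')"
+   ]
+  },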
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### ...or floating point..."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "large\n"
+ ]
+ }
+ ],
+ "source": [
+ "var = 1.5\n",
+ "\n",
+ "match var:\n",
+ " case 1.3:\n",
+ " print('small')\n",
+ " case 1.4:\n",
+ " print('medium')\n",
+ " case 1.5:\n",
+ " print('large')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### ...or Tuple...\n",
+ "Note here we also use a variable to receive *any* value."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "on x-axis\n"
+ ]
+ }
+ ],
+ "source": [
+ "var = (8,0)\n",
+ "\n",
+ "match var:\n",
+ " case (0,x):\n",
+ " print('on y-axis')\n",
+ " case (x,0):\n",
+ " print('on x-axis')\n",
+ " case (x,y):\n",
+ " print('not on axis')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### ...or String"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "small\n"
+ ]
+ }
+ ],
+ "source": [
+ "var = \"S\"\n",
+ "\n",
+ "match var:\n",
+ " case \"S\":\n",
+ " print('small')\n",
+ " case \"Med\":\n",
+ " print('medium')\n",
+ " case \"Lg\":\n",
+ " print('large')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### The Default case _ \n",
+ "The default case, using underscore, is optional. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "large\n"
+ ]
+ }
+ ],
+ "source": [
+ "var = 4\n",
+ "\n",
+ "match var:\n",
+ " case 1:\n",
+ " print('small')\n",
+ " case 2:\n",
+ " print('medium')\n",
+ " case _:\n",
+ " print('large')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Conditionals in case \n",
+    "*or* patterns, written with the | bar, are supported in case statements."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "small\n"
+ ]
+ }
+ ],
+ "source": [
+ "var = 2\n",
+ "\n",
+ "match var:\n",
+ " case 2 | 3:\n",
+ " print('small')\n",
+ " case 4 | 5 | 6:\n",
+ " print('medium')\n",
+ " case _:\n",
+ " print('large')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### No breaks needed\n",
+    "*if* guards are supported, but must follow the syntax: case var if <condition>. \n",
+ "\n",
+ "Note that you do not need break statements. The match block will automatically end execution after one case is executed."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "A\n",
+ "F\n"
+ ]
+ }
+ ],
+ "source": [
+ "def print_grade(score):\n",
+ " match score:\n",
+ " # case score > 90 this does not work!\n",
+ " case score if score >= 90:\n",
+ " print('A')\n",
+ " case score if score >= 80:\n",
+ " print('B')\n",
+ " case score if score >= 70:\n",
+ " print('C')\n",
+ " case score if score >= 60:\n",
+ " print('D')\n",
+ " case _:\n",
+ " print('F')\n",
+ " \n",
+ "print_grade(94)\n",
+ "print_grade(48)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Python Objects \n",
+ "Match statements can also use Python objects and instance variables. \n",
+    "In the final case we could have used the _ default case, but we used a capture variable x instead, so the unrecognized size can be included in the print statement."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "medium\n",
+ "Size XL is not recognized.\n"
+ ]
+ }
+ ],
+ "source": [
+ "class T_shirt:\n",
+ " def __init__(self, s):\n",
+ " self.size = s\n",
+ "\n",
+ " def order(self):\n",
+ " match self.size:\n",
+ " case 'S' | 'Sm':\n",
+ " print('small')\n",
+ " case 'M' | 'Med':\n",
+ " print('medium')\n",
+ " case 'L' | 'Lg':\n",
+ " print('large')\n",
+ " case x:\n",
+ " print(f'Size {x} is not recognized.')\n",
+ " \n",
+ "shirt1 = T_shirt('Med')\n",
+ "shirt1.order()\n",
+ "\n",
+ "shirt2 = T_shirt('XL')\n",
+ "shirt2.order()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/python oriented programming b/python oriented programming
new file mode 100644
index 00000000..8e3a4499
--- /dev/null
+++ b/python oriented programming
@@ -0,0 +1,34 @@
+class Mobile:
+ def make_call(self):
+ print("i am making a call")
+ def play_game(self):
+ print("i am playing games")
+
+m1=Mobile()
+
+m1.make_call()
+
+m1.play_game()
+
+class Mobile:
+ def set_color(self,color):
+ self.color=color
+ def set_cost(self,cost):
+ self.cost=cost
+    def show_color(self):
+        print(self.color)
+    def show_price(self):
+        print(self.cost)
+ def make_call(self):
+ print("i am making a call")
+ def play_game(self):
+ print("i am playing games")
+
+
+
+m2=Mobile()
+m2.set_color("black")
+m2.set_cost(5000)
+
+m2.show_price()
+
+m2.show_color()
+
diff --git a/remove_from_list.py b/remove_from_list.py
new file mode 100644
index 00000000..9619664f
--- /dev/null
+++ b/remove_from_list.py
@@ -0,0 +1,48 @@
+# Python: del vs pop vs remove from a list
+# (c) Joe James 2023
+
+def get_dogs():
+ return ['Fido', 'Rover', 'Spot', 'Duke', 'Chip', 'Spot']
+
+dogs = get_dogs()
+print(dogs)
+
+# Use pop() to remove last item or an item by index and get the returned value.
+print('1. pop last item from list:')
+myDog = dogs.pop()
+print(myDog, dogs)
+
+dogs = get_dogs()
+print('2. pop item with index 1:')
+myDog = dogs.pop(1)
+print(myDog, dogs)
+
+# Use remove() to delete an item by value. (raises ValueError if value not found)
+dogs = get_dogs()
+print('3. remove first Spot from list:')
+dogs.remove('Spot')
+print(dogs)
+
+# Use del to remove an item or range of items by index. Or delete entire list.
+dogs = get_dogs()
+print('4. del item with index 3:')
+del dogs[3]
+print(dogs)
+
+dogs = get_dogs()
+print('5. del items [1:3] from list:')
+del dogs[1:3]
+print(dogs)
+
+dogs = get_dogs()
+print('6. del entire list:')
+del dogs
+# print(dogs)  # would raise NameError: dogs no longer exists after del
+