Sourcery refactored master branch #1
base: master
Conversation
```diff
 x=resp.json()
 j = json.loads(x)
 d = dict(j)

 for k,v in (d.items()):
-    print("{}: {}".format(k,round(v,2)))
+    print(f"{k}: {round(v, 2)}")
```
Lines 30-36 refactored with the following changes:

- Replace call to format with f-string (`use-fstring-for-formatting`)
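As a quick illustration of the `use-fstring-for-formatting` rule, here is a minimal before/after sketch; the variable names are made up for the example and are not taken from the repository:

```python
k, v = "score", 3.14159  # illustrative values only

# Before: interpolation via str.format()
print("{}: {}".format(k, round(v, 2)))

# After: the equivalent f-string
print(f"{k}: {round(v, 2)}")
```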
```diff
-    clf = 'lm_model_v1.pk'

     if test.empty:
         return(bad_request())
-    else:
-        #Load the saved model
-        print("Loading the model...")
-        loaded_model = None
-        with open('./models/'+clf,'rb') as f:
-            loaded_model = pickle.load(f)
-
-        print("The model has been loaded...doing predictions now...")
-        print()
-        predictions = loaded_model.predict(test)
-
-        prediction_series = pd.Series(predictions)
-        response = jsonify(prediction_series.to_json())
-        response.status_code = 200
-        return (response)
+    #Load the saved model
+    print("Loading the model...")
+    loaded_model = None
+    clf = 'lm_model_v1.pk'
+
+    with open(f'./models/{clf}', 'rb') as f:
+        loaded_model = pickle.load(f)
+
+    print("The model has been loaded...doing predictions now...")
+    print()
+    predictions = loaded_model.predict(test)
+
+    prediction_series = pd.Series(predictions)
+    response = jsonify(prediction_series.to_json())
+    response.status_code = 200
+    return (response)
```
Function `apicall` refactored with the following changes:

- Move assignments closer to their usage (`move-assign`)
- Remove unnecessary else after guard condition (`remove-unnecessary-else`)
- Use f-string instead of string concatenation (`use-fstring-for-concatenation`)
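The guard-clause rewrite can be summarised with a small, self-contained sketch. This uses a plain boolean in place of the DataFrame check and a stub `bad_request()` helper, so it is an illustration of the rules rather than the project's real Flask code:

```python
def bad_request():
    # Hypothetical stand-in for the project's error response helper.
    return "400 Bad Request"

def apicall_before(test_is_empty):
    clf = 'lm_model_v1.pk'         # assigned long before it is needed
    if test_is_empty:
        return bad_request()
    else:                          # unnecessary: the guard already returned
        path = './models/' + clf   # string concatenation
        return path

def apicall_after(test_is_empty):
    if test_is_empty:              # guard clause, no else needed
        return bad_request()
    clf = 'lm_model_v1.pk'         # move-assign: next to its only use
    return f'./models/{clf}'       # use-fstring-for-concatenation
```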
```diff
 import pandas as pd

-import os
+import os
```
Lines 4-97 refactored with the following changes:

- Use f-string instead of string concatenation [×5] (`use-fstring-for-concatenation`)
- Replace call to format with f-string (`use-fstring-for-formatting`)
- Hoist repeated code outside conditional statement (`hoist-statement-from-if`)
- Replace `a[0:x]` with `a[:x]` and `a[x:len(a)]` with `a[x:]` [×2] (`remove-redundant-slice-index`)
- Hoist nested repeated code outside conditional statements (`hoist-similar-statement-from-if`)
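Two of the rules above are easy to show with a tiny, generic sketch; the variables are illustrative, not taken from lines 4-97 of the app:

```python
a = list(range(10))
x = 4
verbose = True

# remove-redundant-slice-index: 0 and len(a) are the default slice bounds.
head = a[0:x]        # before
head = a[:x]         # after
tail = a[x:len(a)]   # before
tail = a[x:]         # after

# hoist-statement-from-if: a statement repeated in every branch
# can be moved out of the conditional.
if verbose:
    total = sum(a)   # before: duplicated in both branches
    print(total)
else:
    total = sum(a)

total = sum(a)       # after: hoisted once
if verbose:
    print(total)
```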
```diff
 # Keep adding new words
-for i in range(new_words):
+for _ in range(new_words):
```
Function `generate_random_start` refactored with the following changes:

- Replace unused for index with underscore (`for-index-underscore`)
- Convert for loop into list comprehension [×2] (`list-comprehension`)
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
- Use f-string instead of string concatenation [×6] (`use-fstring-for-concatenation`)

This removes the following comments (why?):

- `#return f"<div>{seed_html}</div><div>{gen_html}</div><div>{a_html}</div>"`
- `# Showing generated and actual abstract`
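Since `use-fstring-for-concatenation` fires six times here, a short sketch of the pattern may help; the HTML fragments below are made up to echo the removed comment, not copied from the function:

```python
# Illustrative fragments; the real function builds seed/generated/actual HTML.
seed_html, gen_html, a_html = "<p>seed</p>", "<p>generated</p>", "<p>actual</p>"

# Before: assembling markup by concatenation
html = "<div>" + seed_html + "</div><div>" + gen_html + "</div><div>" + a_html + "</div>"

# After: a single f-string, as in Sourcery's rewrite
html = f"<div>{seed_html}</div><div>{gen_html}</div><div>{a_html}</div>"
```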
```diff
-word_idx = json.load(open('data/word-index.json'))
+word_idx = json.load(open('data/word-index.json'))
```
Function `generate_from_seed` refactored with the following changes:

- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
- Use f-string instead of string concatenation [×2] (`use-fstring-for-concatenation`)
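`inline-immediately-returned-variable` is the simplest of these rules; a generic sketch with hypothetical function names:

```python
# Before: the value is bound to a name and returned immediately.
def join_tokens(tokens):
    result = " ".join(tokens)
    return result

# After: the intermediate variable is inlined into the return.
def join_tokens_refactored(tokens):
    return " ".join(tokens)

print(join_tokens(["a", "b"]) == join_tokens_refactored(["a", "b"]))  # True
```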
```diff
-    for i in range(len(y)):
+    for _ in range(len(y)):
```
Function `flip` refactored with the following changes:

- Replace unused for index with underscore (`for-index-underscore`)
```diff
-    if m==None:
+    if m is None:
         m=''
         for i in range(1,n_features+1):
-            c='x'+str(i)
+            c = f'x{str(i)}'
             c+=np.random.choice(['+','-'],p=[0.5,0.5])
             m+=c
         m=m[:-1]
     sym_m=sympify(m)
     n_features=len(sym_m.atoms(Symbol))
-    evals=[]
-    lst_features=[]
-    for i in range(n_features):
-        lst_features.append(np.random.normal(scale=5,size=n_samples))
+    lst_features = [
+        np.random.normal(scale=5, size=n_samples) for _ in range(n_features)
+    ]
     lst_features=np.array(lst_features)
     lst_features=lst_features.T
-    for i in range(n_samples):
-        evals.append(eval_multinomial(m,vals=list(lst_features[i])))
-
+    evals = [
+        eval_multinomial(m, vals=list(lst_features[i]))
+        for i in range(n_samples)
+    ]
     evals=np.array(evals)
     evals_binary=evals>0
     evals_binary=evals_binary.flatten()
     evals_binary=np.array(evals_binary,dtype=int)
     evals_binary=flip(evals_binary,p=flip_y)
     evals_binary=evals_binary.reshape(n_samples,1)

     lst_features=lst_features.reshape(n_samples,n_features)
-    x=np.hstack((lst_features,evals_binary))
-
-    return (x)
+    return np.hstack((lst_features,evals_binary))
```
Function `gen_classification_symbolic` refactored with the following changes:

- Use `x is None` rather than `x == None` (`none-compare`)
- Convert for loop into list comprehension [×2] (`list-comprehension`)
- Replace unused for index with underscore (`for-index-underscore`)
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
- Move assignment closer to its usage within a block (`move-assign-in-block`)
- Use f-string instead of string concatenation (`use-fstring-for-concatenation`)
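A condensed, generic sketch of the `none-compare`, `list-comprehension`, and `for-index-underscore` rules at work; the function is a simplified stand-in, not the project's code:

```python
import numpy as np

def make_features_before(m=None, n_features=3, n_samples=5):
    if m == None:                 # none-compare flags this equality check
        m = "x1+x2"
    lst_features = []
    for i in range(n_features):   # index i is never used in the body
        lst_features.append(np.random.normal(scale=5, size=n_samples))
    return m, np.array(lst_features)

def make_features_after(m=None, n_features=3, n_samples=5):
    if m is None:                 # identity test is the idiomatic form
        m = "x1+x2"
    lst_features = [
        np.random.normal(scale=5, size=n_samples) for _ in range(n_features)
    ]                             # list-comprehension + for-index-underscore
    return m, np.array(lst_features)
```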
```diff
-    if m==None:
+    if m is None:
         m=''
         for i in range(1,n_features+1):
-            c='x'+str(i)
+            c = f'x{str(i)}'
             c+=np.random.choice(['+','-'],p=[0.5,0.5])
             m+=c
         m=m[:-1]

     sym_m=sympify(m)
     n_features=len(sym_m.atoms(Symbol))
-    evals=[]
-    lst_features=[]
-
-    for i in range(n_features):
-        lst_features.append(np.random.normal(scale=5,size=n_samples))
+    lst_features = [
+        np.random.normal(scale=5, size=n_samples) for _ in range(n_features)
+    ]
     lst_features=np.array(lst_features)
     lst_features=lst_features.T
     lst_features=lst_features.reshape(n_samples,n_features)

-    for i in range(n_samples):
-        evals.append(eval_multinomial(m,vals=list(lst_features[i])))
-
+    evals = [
+        eval_multinomial(m, vals=list(lst_features[i]))
+        for i in range(n_samples)
+    ]
     evals=np.array(evals)
     evals=evals.reshape(n_samples,1)
```
Function `gen_regression_symbolic` refactored with the following changes:

- Use `x is None` rather than `x == None` (`none-compare`)
- Convert for loop into list comprehension [×2] (`list-comprehension`)
- Replace unused for index with underscore (`for-index-underscore`)
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
- Move assignment closer to its usage within a block (`move-assign-in-block`)
- Use f-string instead of string concatenation (`use-fstring-for-concatenation`)
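`move-assign-in-block` is the one rule in this list not sketched yet: an assignment is moved down to just before its first use inside the same block. A hedged, generic sketch with a hypothetical function and data:

```python
def summarize_before(rows):
    # `evals` is created at the top of the block, far from where it is filled.
    evals = []
    header = rows[0]
    body = rows[1:]
    for row in body:
        evals.append(len(row))
    return header, evals

def summarize_after(rows):
    # move-assign-in-block: the assignment sits next to the loop that fills it
    # (and that loop can then collapse into a comprehension).
    header = rows[0]
    body = rows[1:]
    evals = [len(row) for row in body]
    return header, evals

print(summarize_after([["h"], [1, 2], [3]]))  # (['h'], [2, 1])
```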
```diff
-df = pd.DataFrame(np.random.normal(loc=5,
-                                   scale=5, size=50).reshape(10, 5),
-                  columns = ['A'+ str(i) for i in range(1, 6)])
+df = pd.DataFrame(
+    np.random.normal(loc=5, scale=5, size=50).reshape(10, 5),
+    columns=[f'A{str(i)}' for i in range(1, 6)],
+)
```
Lines 280-317 refactored with the following changes:

- Use f-string instead of string concatenation (`use-fstring-for-concatenation`)
- Simplify comparison to string length [×2] (`simplify-str-len-comparison`)
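A generic sketch of `simplify-str-len-comparison`: an emptiness check expressed through `len()` becomes a direct comparison on the string itself. The variable is illustrative, and the exact replacement Sourcery emits may differ slightly (it can also prefer a plain truthiness test):

```python
word = ""

# Before: emptiness expressed through len()
if len(word) > 0:
    print("non-empty")
if len(word) == 0:
    print("empty")

# After: compare (or truth-test) the string directly
if word != "":
    print("non-empty")
if word == "":
    print("empty")
```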
````diff
 ```
 x = st.slider('x', -8, 8)
 """
````
Lines 414-414 refactored with the following changes:

- Use f-string instead of string concatenation (`use-fstring-for-concatenation`)
```diff
-    s3=sympify(s2)
-
-    return(s3)
+    return sympify(s2)
```
Function `symbolize` refactored with the following changes:

- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
```diff
-    sym_lst=[]
-    for s in sym_set:
-        sym_lst.append(str(s))
+    sym_lst = [str(s) for s in sym_set]
```
Function `eval_multinomial` refactored with the following changes:

- Convert for loop into list comprehension [×2] (`list-comprehension`)
- Remove an unnecessary list construction call prior to sorting (`skip-sorted-list-construction`)
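`skip-sorted-list-construction` has not come up before: `sorted()` accepts any iterable and already returns a new list, so wrapping the argument in `list()` first is redundant. A minimal sketch with an illustrative set:

```python
sym_set = {"x2", "x1", "x3"}   # illustrative set of symbol names

# Before: an intermediate list is built only to be sorted
sym_lst = sorted(list(sym_set))

# After: sorted() works directly on the iterable
sym_lst = sorted(sym_set)
print(sym_lst)  # ['x1', 'x2', 'x3']
```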
```diff
-    for i in range(len(y)):
+    for _ in range(len(y)):
```
Function `flip` refactored with the following changes:

- Replace unused for index with underscore (`for-index-underscore`)
```diff
-    if m==None:
+    if m is None:
         m=''
         for i in range(1,n_features+1):
-            c='x'+str(i)
+            c = f'x{str(i)}'
             c+=np.random.choice(['+','-'],p=[0.5,0.5])
             m+=c
         m=m[:-1]
     sym_m=sympify(m)
     n_features=len(sym_m.atoms(Symbol))
-    evals=[]
-    lst_features=[]
-    for i in range(n_features):
-        lst_features.append(np.random.normal(scale=5,size=n_samples))
+    lst_features = [
+        np.random.normal(scale=5, size=n_samples) for _ in range(n_features)
+    ]
     lst_features=np.array(lst_features)
     lst_features=lst_features.T
-    for i in range(n_samples):
-        evals.append(eval_multinomial(m,vals=list(lst_features[i])))
-
+    evals = [
+        eval_multinomial(m, vals=list(lst_features[i]))
+        for i in range(n_samples)
+    ]
     evals=np.array(evals)
     evals_binary=evals>0
     evals_binary=evals_binary.flatten()
     evals_binary=np.array(evals_binary,dtype=int)
     evals_binary=flip(evals_binary,p=flip_y)
     evals_binary=evals_binary.reshape(n_samples,1)

     lst_features=lst_features.reshape(n_samples,n_features)
-    x=np.hstack((lst_features,evals_binary))
-
-    return (x)
+    return np.hstack((lst_features,evals_binary))
```
Function `gen_classification_symbolic` refactored with the following changes:

- Use `x is None` rather than `x == None` (`none-compare`)
- Convert for loop into list comprehension [×2] (`list-comprehension`)
- Replace unused for index with underscore (`for-index-underscore`)
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
- Move assignment closer to its usage within a block (`move-assign-in-block`)
- Use f-string instead of string concatenation (`use-fstring-for-concatenation`)
```diff
-    if m==None:
+    if m is None:
         m=''
         for i in range(1,n_features+1):
-            c='x'+str(i)
+            c = f'x{str(i)}'
             c+=np.random.choice(['+','-'],p=[0.5,0.5])
             m+=c
         m=m[:-1]

     sym_m=sympify(m)
     n_features=len(sym_m.atoms(Symbol))
-    evals=[]
-    lst_features=[]
-
-    for i in range(n_features):
-        lst_features.append(np.random.normal(scale=5,size=n_samples))
+    lst_features = [
+        np.random.normal(scale=5, size=n_samples) for _ in range(n_features)
+    ]
     lst_features=np.array(lst_features)
     lst_features=lst_features.T
     lst_features=lst_features.reshape(n_samples,n_features)

-    for i in range(n_samples):
-        evals.append(eval_multinomial(m,vals=list(lst_features[i])))
-
+    evals = [
+        eval_multinomial(m, vals=list(lst_features[i]))
+        for i in range(n_samples)
+    ]
     evals=np.array(evals)
     evals=evals.reshape(n_samples,1)
```
Function `gen_regression_symbolic` refactored with the following changes:

- Use `x is None` rather than `x == None` (`none-compare`)
- Convert for loop into list comprehension [×2] (`list-comprehension`)
- Replace unused for index with underscore (`for-index-underscore`)
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
- Move assignment closer to its usage within a block (`move-assign-in-block`)
- Use f-string instead of string concatenation (`use-fstring-for-concatenation`)
Branch `master` refactored by Sourcery.

If you're happy with these changes, merge this Pull Request using the Squash and merge strategy. See our documentation here.

Run Sourcery locally

Reduce the feedback loop during development by using the Sourcery editor plugin.

Review changes via command line

To manually merge these changes, make sure you're on the `master` branch, then run:

Help us improve this pull request!