Data-Science-Regular-Bootcamp
Regular practice on Data Science, Machine Learning, Deep Learning, solving ML project problems, and analytical issues, to steadily build up knowledge. The goal is to help learners with learning resources on Data Science.
name = "Datacamp"

## displaying single quotations
print(f"Hello, '{name}'")
print()

## displaying double quotations
print(f"Hello, \"{name}\"")
Before feeding word sequences into BERT, 15% of the words in each sequence are replaced with a [MASK] token. The model then attempts to predict the original value of the masked words, based on the context provided by the other, non-masked words in the sequence.
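The masking step can be sketched in plain Python. This is a simplified illustration of the idea only (real BERT pre-training works on subword tokens and also sometimes keeps or randomly replaces the selected words); the function name and sentence are hypothetical:

```python
import random

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Replace roughly `mask_prob` of tokens with [MASK], keeping the originals as labels."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append("[MASK]")   # hide the token from the model
            labels.append(tok)        # remember it as the prediction target
        else:
            masked.append(tok)
            labels.append(None)       # no prediction needed here
    return masked, labels

masked, labels = mask_tokens("the quick brown fox jumps over the lazy dog".split())
print(masked)
```

The model is then trained to recover the tokens stored in `labels` at the `[MASK]` positions.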
p_test = pd.read_csv('TrainSA.csv')
p_test.SentimentText = p_test.SentimentText.astype(str)
data = ["Project Gutenberg’s", "Alice’s Adventures in Wonderland",
        "Project Gutenberg’s", "Adventures in Wonderland",
        "Project Gutenberg’s"]
rdd = spark.sparkContext.parallelize(data)
for element in rdd.collect():
    print(element)
PySpark When Otherwise and SQL Case When on DataFrame with Examples – Similar to SQL and programming languages, PySpark supports a way to check multiple conditions in sequence and return a value when the first condition evaluates to true, with an optional otherwise() fallback for the default case.
RDD (Resilient Distributed Dataset) is a fundamental building block of PySpark: a fault-tolerant, immutable distributed collection of objects. Immutable means that once you create an RDD you cannot change it; every transformation produces a new RDD instead.
import urllib.request, base64 u = urllib.request.urlopen(currentWeatherIconURL) raw_data = u.read() u.close() b64_data = base64.encodestring(raw_data) image = PhotoImage(data=b64_data) label = Label(image=image, bg="White") label.pack()
Rule-based chatbots, also referred to as decision-tree bots, use a series of defined rules. These rules are the basis for the types of problems the chatbot is familiar with and can deliver solutions for; anything outside the rules falls through to a default response.
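The idea can be sketched as a fixed keyword-to-reply table with a default branch. The rules and replies below are hypothetical examples, not from the source:

```python
# Each rule maps a trigger keyword to a canned reply
RULES = {
    "hello": "Hi there! How can I help you?",
    "price": "Our basic plan starts at $10/month.",
    "bye": "Goodbye! Have a great day.",
}

def reply(message):
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # no rule matched: the decision tree's default branch
    return "Sorry, I don't understand. Could you rephrase?"

print(reply("Hello!"))
print(reply("What is the price?"))
print(reply("Tell me a joke"))
```

Anything not covered by a rule gets the fallback reply, which is exactly the limitation the text describes: the bot only knows the problems its rules encode.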
Ensemble methods are techniques that create multiple models and then combine them to produce improved results. Ensemble methods usually produce more accurate solutions than a single model would; this has been the case in a number of machine learning competitions, where the winning solutions used ensembles.
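The simplest way to combine classifiers is hard majority voting. A minimal sketch in plain Python, with hypothetical model outputs:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-sample predictions from several models by majority vote."""
    combined = []
    for votes in zip(*predictions):            # one tuple of votes per sample
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Hypothetical outputs from three classifiers on four samples
model_a = [1, 0, 1, 1]
model_b = [1, 1, 0, 1]
model_c = [0, 0, 1, 1]
print(majority_vote([model_a, model_b, model_c]))  # → [1, 0, 1, 1]
```

Each sample's final label is whatever most models agreed on, so individual models' errors cancel out as long as they don't all make the same mistake.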
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
# Older Spark releases are moved off downloads.apache.org; fetch from the archive
!wget -q https://archive.apache.org/dist/spark/spark-2.4.8/spark-2.4.8-bin-hadoop2.7.tgz
!tar xf spark-2.4.8-bin-hadoop2.7.tgz
!pip install -q findspark