CURRICULUM FOR SESSION 2025-2026
JOB ROLE: AI Assistant | CLASS – XII
Artificial Intelligence (AI) is a transformative field in computer science that focuses on creating intelligent systems capable of learning, adapting, and self-improving. These systems can process vast amounts of data with remarkable speed and accuracy, surpassing human capabilities in many domains. AI's impact extends across disciplines, offering innovative solutions to some of the world's most pressing challenges. From revolutionizing healthcare with advanced diagnostics and personalized treatments to enhancing agricultural practices and ensuring food security, AI has the power to reshape industries. It can improve access to quality education and play a pivotal role in protecting and restoring our planet's ecosystems by cleaning our oceans, air, and water. The possibilities for leveraging AI to create a better future are boundless, provided we harness its potential responsibly and ethically.
By the end of this course, students will have developed the employability and vocational skills outlined in the units below.
This course follows a structured sequence of instructional units designed to develop employability and vocational skills among students. These units are carefully crafted to integrate seamlessly with other educational subjects, fostering a holistic learning experience.
Detailed scheme is available in the "Scheme of Units" tab.
NOTE: The detailed curriculum/ topics to be covered under Part A: Employability Skills can be downloaded from the CBSE website.
| Units | NO. OF HOURS | MAX MARKS |
|---|---|---|
| Unit 1: Communication Skills-IV | 15 | 2 |
| Unit 2: Self-Management Skills-IV | 10 | 2 |
| Unit 3: ICT Skills-IV | 15 | 2 |
| Unit 4: Entrepreneurial Skills-IV | 10 | 2 |
| Unit 5: Green Skills-IV | 10 | 2 |
| TOTAL (Part A) | 60 | 10 |
| Units | Th. Hours | Prac. Hours | MAX MARKS |
|---|---|---|---|
| Unit 1: Python Programming – II* | 6 | 18 | * (evaluated in practicals only) |
| Unit 2: Data Science Methodology: An Analytic Approach to Capstone Project | 8 | 12 | 8 |
| Unit 3: Making Machines See | 6 | 12 | 6 |
| Unit 4: AI with Orange Data Mining Tool* | 4 | 18 | * (evaluated in practicals only) |
| Unit 5: Introduction to Big Data and Data Analytics | 7 | 12 | 6 |
| Unit 6: Understanding Neural Networks | 8 | 12 | 8 |
| Unit 7: Generative AI | 6 | 12 | 7 |
| Unit 8: Data Storytelling | 5 | 4 | 5 |
| TOTAL (Part B) | 50 | 100 | 40 |
NOTE: Units/portions marked with * are to be evaluated in practicals only.
Detailed breakdown is available in the "Practicals" tab.
| Description | MAX MARKS |
|---|---|
| Capstone Project + Project Documentation (Includes Project, Documentation, Video) | 25 |
| Practical File | 10 |
| Lab Test (Python and Orange Data Mining) | 10 |
| Viva Voce (based on Capstone Project + Practical File) | 5 |
| TOTAL (Part C) | 50 |
DETAILED CURRICULUM/TOPICS FOR CLASS XII - Part-B
import pandas as pd
# Creation of a Series from Scalar Values
series1 = pd.Series([10,20,30])
print("Array with Rank 1: \n",series1)
# Creation of a DataFrame from NumPy arrays
import numpy as np
array1=np.array([90,100,110,120])
array2=np.array([50,60,70,80])
array3=np.array([10,20,30,40])
marksDF = pd.DataFrame([array1, array2, array3], columns=[ 'A', 'B', 'C', 'D'])
print(marksDF)
# Creation of a DataFrame from dictionary of array/lists
data = {'Name':['Varun', 'Ganesh', 'Joseph', 'Abdul','Reena'],
'Age':[37,30,38, 39,40]}
df = pd.DataFrame(data)
print(df)
# Creation of DataFrame from List of Dictionaries
listDict = [{'a':10, 'b':20}, {'a':5,'b':10,'c':20}]
df_list = pd.DataFrame(listDict)
print(df_list)
# Assuming ResultSheet DataFrame is defined as in PDF
ResultSheet={'Rajat': pd.Series([90, 91, 97],index=['Maths','Science','Hindi']),
'Amrita': pd.Series([92, 81, 96],index=['Maths','Science','Hindi']),
'Meenakshi': pd.Series([89, 91, 88],index=['Maths','Science','Hindi']),
'Rose': pd.Series([81, 71, 67],index=['Maths','Science','Hindi']),
'Karthika': pd.Series([94, 95, 99],index=['Maths','Science','Hindi'])}
Result = pd.DataFrame(ResultSheet)
print("Original DataFrame:")
print(Result)
# Add a new column
Result['Fathima']=[89,78,76]
print("\nAfter adding Fathima column:")
print(Result)
# Add a new row using .loc
Result.loc['English'] = [90, 92, 89, 80, 90, 88] # Six values: five original students plus the newly added Fathima
print("\nAfter adding English row:")
print(Result)
# Change value using .loc
Result.loc['Science'] = [92, 84, 90, 72, 96, 88] # Example given for Science
print("\nAfter changing Science row values:")
print(Result)
# Delete a row
result_after_drop_row = Result.drop('Hindi', axis=0)
print("\nAfter deleting Hindi row:")
print(result_after_drop_row)
# Delete multiple columns
result_after_drop_cols = Result.drop(['Rajat','Meenakshi','Karthika'], axis=1)
print("\nAfter deleting Rajat, Meenakshi, Karthika columns:")
print(result_after_drop_cols)
import pandas as pd
import numpy as np # Needed for np.nan
# Case Study Data (Based on PDF p.15)
ResultSheet_missing={'Maths': pd.Series([90,91,97,89,65,93],
index=['Heena','Shefali','Meera','Joseph','Suhana','Bismeet']),
'Science':pd.Series([92,81,np.nan,87,50,88], # Meera's Science missing
index=['Heena','Shefali','Meera','Joseph','Suhana','Bismeet']),
'English': pd.Series([89, 91, 88,78,77,82],
index=['Heena','Shefali','Meera','Joseph','Suhana','Bismeet']),
'Hindi': pd.Series([81, 71, 67,82,np.nan,89], # Suhana's Hindi missing
index=['Heena','Shefali','Meera','Joseph','Suhana','Bismeet']),
'AI': pd.Series([94, 95, 99,np.nan,96,99], # Joseph's AI missing
index=['Heena','Shefali','Meera','Joseph','Suhana','Bismeet'])}
marks = pd.DataFrame(ResultSheet_missing)
print("DataFrame with Missing Values:")
print(marks)
# Check for missing values
print("\nIs null?:")
print(marks.isnull())
print("\nIs there any null in 'Science' column?:")
print(marks['Science'].isnull().any())
print("\nIs there any null in 'Maths' column?:")
print(marks['Maths'].isnull().any())
# To find the total number of NaN in the whole dataset
print("\nTotal number of missing values:", marks.isnull().sum().sum()) # sum() along columns first, then sum that result
# Apply dropna() (Drops rows with any NaN)
print("\nDataFrame after dropna() (removes rows with NaN):")
drop=marks.dropna()
print(drop)
# Estimate missing value using fillna(0)
print("\nDataFrame after fillna(0) (fills NaN with 0):")
FillZero = marks.fillna(0)
print(FillZero)
# Estimate missing value using mean (A more common strategy)
# print("\nDataFrame after fillna(mean):")
# FillMean = marks.fillna(marks.mean()) # Example of filling with mean
# print(FillMean)
**For Advanced Learners**: Implementation of Linear Regression on a sample dataset, covering loading the data, splitting it into training and test sets, fitting the model, and evaluating it with MSE/RMSE. Refer to PDF pages 17-19 for the full walkthrough.
Advanced Learners would work with code like this to implement Linear Regression:
# Sample Linear Regression Code Structure (Concept from PDF p.17-19)
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error # Used for evaluation
# Load data (Assuming 'USA_Housing.csv' is available locally or linked)
# df = pd.read_csv('USA_Housing.csv')
# Separate features (X) and target (y)
# Assuming df is loaded and columns 'Price' exists for target
# X = df.drop('Price', axis=1) # Example features
# y = df['Price'] # Target
# Split data into training and testing sets (e.g., 80% train, 20% test)
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=101)
# Create and train the model
# model = LinearRegression()
# model.fit(X_train, y_train)
# Make predictions
# predictions = model.predict(X_test)
# Evaluate the model (e.g., calculate Mean Squared Error)
# mse = mean_squared_error(y_test, predictions)
# print(f"Mean Squared Error: {mse}")
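For classrooms without scikit-learn or the housing CSV at hand, the same workflow (split, fit, evaluate with MSE/RMSE) can be sketched in plain Python on synthetic data. The dataset below is illustrative and not from the PDF; it is generated from a known line plus noise so the fitted slope and intercept can be checked:

```python
import random
import math

random.seed(42)

# Synthetic data: y = 1.8x + 32 plus a little noise (a hypothetical
# stand-in for a real dataset such as the housing CSV mentioned above)
xs = [float(i) for i in range(50)]
ys = [1.8 * x + 32 + random.uniform(-1, 1) for x in xs]

# Split 80/20 into training and testing sets
split = int(0.8 * len(xs))
x_train, x_test = xs[:split], xs[split:]
y_train, y_test = ys[:split], ys[split:]

# Fit y = a*x + b by ordinary least squares (closed form)
mean_x = sum(x_train) / len(x_train)
mean_y = sum(y_train) / len(y_train)
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(x_train, y_train))
den = sum((x - mean_x) ** 2 for x in x_train)
a = num / den
b = mean_y - a * mean_x

# Evaluate on the held-out test set
predictions = [a * x + b for x in x_test]
mse = sum((p - y) ** 2 for p, y in zip(predictions, y_test)) / len(y_test)
rmse = math.sqrt(mse)
print(f"slope={a:.2f}, intercept={b:.2f}, MSE={mse:.3f}, RMSE={rmse:.3f}")
```

Because the data was generated from y = 1.8x + 32, the fitted slope and intercept land close to those values, which mirrors how the sklearn version above would behave on the real dataset.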
The Data Science Methodology put forward by John B. Rollins, a Data Scientist in IBM Analytics, is discussed here. It details the major steps involved in practicing Data Science: from framing a concrete business or research problem, to collecting and analyzing data, to building a model, and finally to understanding the feedback after model deployment. Students can develop their Capstone Project based on this methodology.
Students will be able to -
A methodology gives the Data Scientist a framework for designing an AI project. The framework helps the team decide on the methods, processes, and strategies that will be employed to obtain the correct output from the AI project. It is the best way to organize the entire project and complete it systematically without overruns of time or cost.
Data Science Methodology is a process with a prescribed sequence of iterative steps that data scientists follow to approach a problem and find a solution. It equips practitioners to handle and make sense of data.
In this unit, we discuss the steps of Data Science Methodology... It consists of 10 steps... There are five modules, each going through two stages of the methodology, explaining the rationale as to why each stage is required.
...[Discussion of steps like Business Understanding, Analytic Approach, Data Requirements, Data Collection, Data Understanding, Data Preparation, AI Modelling, Evaluation, Deployment, Feedback as detailed in the PDF pp. 24-30]...
Visual aids from the PDF describing the methodology and types of analytics are omitted here. Refer to the PDF for diagrams.
Evaluating the performance of a trained machine learning model is essential. Model Validation offers a systematic approach to measure its accuracy and reliability, providing insights into how well it generalizes to new, unseen data.
Model validation is the step conducted post Model Training, wherein the effectiveness of the trained model is assessed using a testing dataset... The benefits of Model Validation include enhanced model quality, reduced risk of errors, and protection against both overfitting and underfitting.
Commonly used Validation techniques are Train-test split, K-Fold Cross Validation, Leave One out Cross Validation, Time Series Cross Validation etc.
The train-test split is a technique for evaluating the performance of a machine learning algorithm... The procedure involves taking a dataset and dividing it into two subsets... (Training Dataset and Test Dataset)... [Details and common percentages 80/20, 70/30, 67/33]
Cross Validation is a technique used to evaluate a model’s performance. It splits the data into multiple parts or folds... k-fold cross validation: work with k subsets... [Explanation based on k=5 example]. Cross-validation gives a more accurate measure of model quality, especially important if you are making a lot of modelling decisions.
...[Table comparing Train-Test Split and Cross Validation is omitted here. Refer to PDF p.39].
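The k-fold splitting scheme described above can be sketched in plain Python; the sample count and k below are illustrative:

```python
def k_fold_indices(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k-fold cross validation.

    Each of the k folds serves as the test set exactly once, while the
    remaining k-1 folds form the training set.
    """
    # Distribute samples as evenly as possible across the k folds
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        test_idx = indices[start:start + size]
        train_idx = indices[:start] + indices[start + size:]
        yield train_idx, test_idx
        start += size

# Example: 10 samples, k = 5 (matching the k=5 example discussed above)
folds = list(k_fold_indices(10, 5))
for i, (train_idx, test_idx) in enumerate(folds):
    print(f"Fold {i + 1}: test={test_idx}")
```

Every sample appears in exactly one test fold, which is why averaging the k scores gives a more reliable estimate than a single train-test split.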
Evaluation metrics help assess the performance of a trained model on a test dataset, providing insights into its strengths and weaknesses. These metrics enable comparison of different models...
In classification problems the target variable takes a finite set of values, while in regression problems it is continuous; each type therefore uses different evaluation metrics.
...[Sample calculations and formulas for metrics are included in the PDF p. 41-42 and omitted here. Refer to the PDF].
Include activities such as calculating MSE/RMSE in MS Excel, calculating Classification metrics from a Confusion Matrix, and Python code for Model Evaluation. (See corresponding section in "Practicals" tab for exercise questions and Python code examples).
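As a sketch of the confusion-matrix calculations mentioned above, the following computes accuracy, precision, recall, and F1 from the four cells of a binary confusion matrix; the counts used are hypothetical, not taken from the PDF:

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute common metrics from the cells of a binary confusion matrix.

    tp/fp/fn/tn = true positives, false positives, false negatives,
    true negatives.
    """
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

# Hypothetical confusion matrix: 40 TP, 10 FP, 5 FN, 45 TN
acc, prec, rec, f1 = classification_metrics(40, 10, 5, 45)
print(f"Accuracy={acc:.2f}, Precision={prec:.2f}, Recall={rec:.2f}, F1={f1:.2f}")
```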
...[Content from PDF p.49-70 details the Introduction, Working, Process stages (Image Acquisition, Preprocessing - Noise Reduction, Normalization, Resizing, Histogram Equalization, Feature Extraction - Edge/Corner/Texture/Color based, Detection/Segmentation - Single/Multiple Object Tasks, Classification, Localization, Image Segmentation, High-Level Processing), Applications, and Challenges/Future of Computer Vision.]...
Topics include:
...[Detailed steps and concepts on image digitization and representation using pixel values based on PDF p. 51-54 are described].
# Concept: Image representation based on 0s and 1s (Simplified)
# In reality, grayscale pixels use numbers from 0-255
# A basic idea is to convert a grayscale image to a simple binary representation
from PIL import Image  # Requires Pillow library: pip install Pillow

def image_to_binary_art(image_path, threshold=128):
    try:
        img = Image.open(image_path).convert('L')  # Open image and convert to grayscale ('L' mode)
        pixels = img.load()
        width, height = img.size
        binary_repr = []
        for y in range(height):
            row = []
            for x in range(width):
                # Convert pixel value to 0 or 1 based on threshold
                if pixels[x, y] > threshold:
                    row.append('1')  # Represents a light area (closer to white)
                else:
                    row.append('0')  # Represents a dark area (closer to black)
            binary_repr.append(''.join(row))
        for row in binary_repr:
            print(row)  # Print as lines of 0s and 1s
    except FileNotFoundError:
        print(f"Error: Image file not found at {image_path}")
    except Exception as e:
        print(f"An error occurred: {e}")

# Example Usage (Requires a local image file named 'sample_image.jpg')
# image_to_binary_art('sample_image.jpg', threshold=128)
...[Steps for using Teachable Machine (teachablemachine.withgoogle.com) to train an image classification model and embed it into a website are outlined in PDF p.62-65].
This involves using online tools rather than writing direct Python code for the training and website deployment steps.
...[Details on Introduction, Loading/Displaying, Resizing, Converting to Grayscale based on PDF p.66-68]
# Sample OpenCV Code (Concepts from PDF p.67-68)
import cv2
# import numpy as np  # Often used with OpenCV for image arrays

# Assuming you have an image file named 'example.jpg'
image_path = 'example.jpg'  # *** Ensure this path is correct ***

# 3.7.2. Loading and Displaying an Image
try:
    image = cv2.imread(image_path)  # Loads image as a NumPy array (None on failure)
    if image is None:
        print(f"Error: Image not found or cannot be loaded from {image_path}")
    else:
        cv2.imshow('Original Image', image)
        print(f"Image shape: {image.shape} (height, width, channels)")

        # 3.7.3. Resizing an image
        new_width = 300
        new_height = 300
        # resize_img = cv2.resize(image, (new_width, new_height))  # Resize to fixed size
        # cv2.imshow('Resized Image', resize_img)
        # print(f"Resized Image shape: {resize_img.shape}")

        # Resize preserving aspect ratio (more advanced)
        # height, width = image.shape[:2]
        # aspect_ratio = width / height
        # if width > height:
        #     new_width_auto = 300
        #     new_height_auto = int(new_width_auto / aspect_ratio)
        # else:
        #     new_height_auto = 300
        #     new_width_auto = int(new_height_auto * aspect_ratio)
        # resized_img_auto = cv2.resize(image, (new_width_auto, new_height_auto))
        # cv2.imshow('Resized (Aspect Ratio Preserved)', resized_img_auto)
        # print(f"Resized (Aspect Ratio Preserved) shape: {resized_img_auto.shape}")

        # 3.7.4. Converting an Image to Grayscale
        gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # Convert BGR to Grayscale
        cv2.imshow('Grayscale Image', gray_image)
        print(f"Grayscale Image shape: {gray_image.shape}")

        print("\nDisplaying images. Press any key in the image window to close.")
        cv2.waitKey(0)  # Wait indefinitely for a key press
finally:
    # Clean up any windows created by OpenCV
    cv2.destroyAllWindows()
Multiple Choice, Short Answer, Long Answer, and Competency Based questions based on Unit 3 topics. (See corresponding section in "Practicals" tab for exercise questions).
...[Content from PDF p.71-91 covers Introduction to Data Mining, Introduction to Orange tool, Beneficiaries, Getting started, Components (Blank Canvas, Widgets, Connectors), Default Widget Catalog (Data, Transform, Visualize, Model, Evaluate, Unsupervised), Key Domains (Data Science, Computer Vision, Natural Language Processing) with Orange, and practical examples using the Orange tool.]...
This unit primarily involves hands-on work using the visual programming interface of Orange Data Mining tool. Python code is less central here, but Orange can integrate with Python scripts.
Steps to visualize and classify the Iris dataset using Orange widgets (File, Data Table, Scatter Plot, Tree, Predictions, Test and Score, Confusion Matrix) are detailed in PDF pp. 77-83. This demonstrates data loading, visualization, model training, and evaluation through Orange's visual interface.
Steps to perform Image Analytics with Orange (Install Add-on, Import Images, Image Embedding, Computing Similarities, Hierarchical Clustering, Visualization) for tasks like image clustering (e.g., Dog and Cat images) are detailed in PDF pp. 84-87.
Steps to perform Text Analysis with Orange (Install Text Add-on, Load/Create Textual Data, Visualize with Corpus Viewer, Visualize Word Frequencies with Word Cloud, Preprocess Text) are detailed in PDF pp. 88-90.
Multiple Choice, Short Answer, and Competency Based questions based on Unit 4 topics, focusing on understanding the Orange tool, its components, widgets, and applications across Data Science, Computer Vision, and NLP. (See corresponding section in "Practicals" tab for exercise questions).
...[Content from PDF p.92-108 covers Definition of Small vs Big Data, Types of Big Data (Structured, Semi-Structured, Unstructured with examples), Advantages and Disadvantages, Characteristics (3Vs: Volume, Velocity, Variety, expanded to 6Vs including Veracity, Value, Variability), Global trends impacting Big Data, Working process (Gather, Process, Clean, Analyze), Mining Data Streams, Future trends (Real-Time Analytics, Advanced Predictive Models, Quantum Computing). Also includes a case study on an OTT Platform].
Topics include:
...[Summary of the case study describing how the platform uses Volume, Velocity, Variety, Veracity, Value, and Variability of user data is in PDF p. 97-98].
The syllabus lists this as an activity (*to be evaluated in practicals only), implying the use of Orange for large or complex datasets, although step-by-step Orange instructions in this unit are less detailed than the Unit 4 examples. A sample activity using the Heart Disease dataset in Orange is outlined in PDF pp. 99-103, covering Gather, Process (Normalization), Clean (Impute Missing Values), and Analyze (Logistic Regression, Test and Score).
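The Process (normalization) and Clean (impute missing values) stages mentioned above can also be illustrated outside Orange with a few lines of plain Python. The `cholesterol` column below is a hypothetical stand-in for a field from the Heart Disease dataset, with `None` marking a missing reading:

```python
def impute_mean(values):
    """Replace None (missing) entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def min_max_normalize(values):
    """Scale values to the [0, 1] range (min-max normalization)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical column with one missing reading
cholesterol = [200, None, 180, 240, 220]
filled = impute_mean(cholesterol)       # Clean: fill the gap with the mean
scaled = min_max_normalize(filled)      # Process: rescale to [0, 1]
print(filled)
print([round(s, 2) for s in scaled])
```

This mirrors what Orange's Impute and Normalize widgets do behind the scenes in the workflow described in the PDF.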
Multiple Choice, True/False, Short Answer, Long Answer, and Competency Based questions focusing on definitions, characteristics, advantages/disadvantages, and trends related to Big Data and Big Data Analytics. (See corresponding section in "Practicals" tab for exercise questions).
...[Content from PDF p.109-128 covers Introduction to Neural Networks, Parts (Input, Hidden, Output Layers), Components (Neurons, Weights, Activation Functions, Bias, Connections, Learning Rule - Backpropagation, Propagation Functions), Working principle (simple calculation example), Types (Perceptron, FFNN, CNN, RNN, GAN), Future & Impact on Society.]...
Topics include:
Includes using Machine Learning for Kids, implementing a simple Neural Network for Celsius to Fahrenheit conversion using TensorFlow/Keras, and exploring TensorFlow Playground.
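Before turning to the tools, the working principle mentioned above (a weighted sum plus bias, passed through an activation function) can be sketched in a few lines. The input, weight, and bias values here are hypothetical, chosen only to make the arithmetic easy to follow:

```python
import math

def neuron(inputs, weights, bias):
    """Forward pass of a single neuron: weighted sum + bias, then sigmoid."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # Sigmoid activation squashes z into (0, 1)

# Hypothetical values: two inputs, two weights, one bias
# z = 1.0*0.4 + 0.5*0.6 + 0.1 = 0.8, then sigmoid(0.8)
out = neuron([1.0, 0.5], [0.4, 0.6], bias=0.1)
print(round(out, 4))
```

Stacking many such neurons into layers, and adjusting the weights and biases via backpropagation, is exactly what the TensorFlow/Keras activities below automate.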
...[Steps for using the machinelearningforkids.co.uk platform to train a simple image/text model (e.g., Identify Animals & Birds) and visualize the neural network are outlined in PDF p. 118-119]. This uses a simplified online platform.
...[Python code demonstrating building, training, and evaluating a simple single-layer neural network for this task is in PDF pp. 120-121].
# Simple NN for Celsius to Fahrenheit using TensorFlow/Keras (Based on PDF p.120-121)
import tensorflow as tf
import numpy as np
# import matplotlib.pyplot as plt # Uncomment if you want to plot loss
# Training Data (Example pairs of Celsius and Fahrenheit)
c = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
f = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float) # Fahrenheit values follow F = 1.8C + 32, rounded to integers
# Create a simple model (single dense layer with 1 neuron)
model = tf.keras.Sequential([
tf.keras.layers.Dense(units=1, input_shape=[1])
])
# Compile the model (set up for training)
model.compile(loss='mean_squared_error', optimizer=tf.keras.optimizers.Adam(0.1))
# Train the model (Fit the data)
# verbose=False stops progress printing during training
history = model.fit(c, f, epochs=500, verbose=False)
print("Finished training the model")
# Make a prediction
print(model.predict(np.array([100.0]))) # Predict Fahrenheit for 100 Celsius
# Compare with actual formula (F = C * 1.8 + 32)
f_by_formula = (100.0 * 1.8) + 32
print(f"Formula result for 100 C: {f_by_formula}")
# Example showing the model summary (optional)
# print(model.summary())
# Example showing loss over epochs (optional - requires matplotlib)
# plt.xlabel('Epoch Number')
# plt.ylabel("Loss Magnitude")
# plt.plot(history.history['loss'])
# plt.show()
...[Python code demonstrating a simple NN with two hidden layers using TensorFlow/Keras based on PDF pp. 121-122]. This shows building a slightly more complex model.
# Simple NN with Hidden Layers for C to F (Based on PDF pp. 121-122)
import tensorflow as tf
import numpy as np
# import pandas as pd # Uncomment if using a CSV
# import matplotlib.pyplot as plt # Uncomment if plotting
# Option A: load data from a CSV (e.g., 'cel_fah.csv') by uncommenting below:
# temp_df = pd.read_csv('cel_fah.csv')
# X_train = temp_df['Celsius'].values    # .values converts a Pandas Series to a NumPy array
# y_train = temp_df['Fahrenheit'].values

# Option B: static data (same as the Activity 2 data)
X_train = np.array([-40., -10., 0., 8., 15., 22., 38.], dtype=float)
y_train = np.array([-40., 14., 32., 46., 59., 72., 100.], dtype=float)
print("Using static data for demonstration.")
# Define a sequential neural network with hidden layers
model_hidden = tf.keras.Sequential([
tf.keras.layers.Dense(units=32, input_shape=[1]), # First hidden layer
tf.keras.layers.Dense(units=32), # Second hidden layer
tf.keras.layers.Dense(units=1) # Output layer
])
# Compile the model
model_hidden.compile(loss='mean_squared_error', optimizer=tf.keras.optimizers.Adam(0.1))
# Train the model
print("\nTraining model with hidden layers...")
# Use a smaller number of epochs for demonstration, verbose to show progress
history_hidden = model_hidden.fit(X_train, y_train, epochs=20, verbose=True)
print("\nFinished training the model with hidden layers")
# Make a prediction
print("\nPrediction for 100 C using model with hidden layers:")
print(model_hidden.predict(np.array([100.0])))
# Note: This is a simplified example. More complex problems would need more data, epochs, tuning.
...[Description of the tensorflow.org/playground web app for interactive neural network visualization and experimentation based on PDF pp. 123-125]. This is a tool for visual learning, not direct Python coding activity.
Multiple Choice, Short Answer, Long Answer, and Competency Based questions based on Unit 6 topics. (See corresponding section in "Practicals" tab for exercise questions).
...[Content from PDF p.129-151 covers Introduction to Generative AI, Working principle (GANs, VAEs), Generative vs Discriminative models, Applications (Image/Text/Audio/Video Generation), LLM (Large Language Model) overview and Applications, Limitations & Risks, Ethical/Social Implications (Deepfakes, Bias, Plagiarism, Transparency), and hands-on exploration activities.]...
Topics include:
These activities involve using various online Generative AI tools:
...[Python code demonstrating initializing Gemini API, starting a chat session, and interacting based on PDF pp. 145-147].
# Simple Chatbot with Google Gemini API (Based on PDF p.147)
# Requires installing google-generativeai: pip install -q -U google-generativeai
import google.generativeai as genai
import os  # To read the key from an environment variable

# *** Set your Gemini API Key ***
# Obtain one from Google AI Studio (aistudio.google.com).
# DO NOT hardcode your key directly in publicly shared code.
# Recommended: use environment variables or a secrets manager.
API_KEY = os.environ.get('GOOGLE_API_KEY')
# For quick local testing only, you may instead assign the key directly:
# API_KEY = 'YOUR_API_KEY'  # <-- Replace with your actual key string

if API_KEY is None or API_KEY == 'YOUR_API_KEY':
    print("WARNING: Set the GOOGLE_API_KEY environment variable or replace 'YOUR_API_KEY' with your actual key.")
    print("Skipping Gemini API interaction.")

    class MockChatSession:  # Stand-in chat session used when no key is available
        def send_message(self, message):
            class MockResponse:
                text = ("Mock Response: Please provide a valid Gemini API key "
                        "to interact with the actual model.")
            return MockResponse()

    class MockModel:  # Stand-in model object for demonstration without a key
        def start_chat(self, history=None):
            return MockChatSession()

    model = MockModel()  # Use the mock model if the key is missing
else:
    # Configure the Gemini API
    genai.configure(api_key=API_KEY)
    # Choose a model (e.g., 'gemini-pro' for text only)
    model = genai.GenerativeModel('gemini-pro')

# Start a chat session (a previous history can be passed here to resume a chat)
chat = model.start_chat(history=[])
print("Chatbot initialized. Type 'exit' to end.")

# Simple interaction loop
while True:
    user_input = input("You: ")
    if user_input.lower() == 'exit':
        break
    try:
        # Send message and get response
        # (stream=True can be used for streaming responses if the model supports it)
        response = chat.send_message(user_input)
        print(f"Chatbot: {response.text}")
    except Exception as e:
        print(f"An error occurred during communication: {e}")

print("Chat session ended.")
Multiple Choice, True/False, Fill in the Blanks, Short Answer, Case Study Analysis, Ethical Dilemma, and Competency Based questions covering all Unit 7 topics. (See corresponding section in "Practicals" tab for exercise questions).
...[Content from PDF p.152-168 covers Introduction, Why it's powerful, Essential elements (Data, Narrative, Visuals) and how they interlink, Narrative structure (Freytag's Pyramid applied to Data Storytelling), Visualizations for different data types (Word Cloud, FacetGrid, Line Graph, Bar Chart, Pie Chart, Scatter Plot, Histogram, Heat Map, Candlestick Chart, Map Chart), Steps to create a story through data, Examples (Spotify, Uber, MDMS dropout rates, Covid Vaccine data), Ethics, Conclusion.]...
Topics include:
Includes descriptions and potential visual references (omitted here) for Spotify, Uber, Mid-Day Meal Scheme impact on dropout rates, and Covid Vaccine data. These serve as illustrations of combining data, visuals, and narrative.
Describes and lists appropriate charts/graphs for Text Data, Mixed Data, and Numeric Data, Stocks data, and Geographic data. Examples of chart types include Word Cloud, FacetGrid, Line Graph, Bar Chart, Pie Chart, Scatter Plot, Histogram, Heat Map, Candlestick Chart, Map Chart.
Simplified steps outlined as: Collect data -> Use visualization tools -> Observe relationships -> Create simple narrative.
Covers important considerations: Accuracy, Transparency, Respect for Privacy.
Multiple Choice, True/False, Short Answer, Long Answer, and Case Study Based questions covering all Unit 8 topics. (See corresponding section in "Practicals" tab for exercise questions).
The following are to be included in the Practical File:
Examples/descriptions of these concepts are provided in the respective Unit tabs (Units 1, 6, 7).
Note: Snapshots of all the steps and outputs to be taken and pasted in the practical file.
Using available data on student enrollment, attendance, and dropout rates... create a compelling data story... Uncover trends, patterns, and correlations... Consider incorporating visualizations, charts, and graphs... Analyze external factors or events... Provide a comprehensive narrative... (Full description from PDF p.8).
...[Details from PDF p.9 are presented as list items]:
Please refer to the Project guidelines of CBSE-IBM AI PROJECT COOKBOOK (Page -2) - [Link/Source reference from PDF]
Note for Educators: Python or No code/low code platforms like Orange Data Mining tool can be chosen by the students for developing their Capstone Projects.
This section could also include all the exercise and case study questions listed at the end of each unit (PDF pp. 20-22, 40-48, 69-70, 90-91, 106-108, 127-128, 148-151, 165-168) for easy reference by students and educators. Adding them here would make the Practicals tab very long; therefore, referring to the corresponding unit tabs for detailed exercises is more organized in this multi-tab structure.
The list given below is suggestive and an exhaustive list should be compiled by the teacher(s) teaching the subject. Only basic tools, equipment and accessories should be procured by the Institution so that the routine tasks can be performed by the students regularly for practice and acquiring adequate practical experience.
...[Details from PDF pp.11-12 are summarized below]:
Qualification includes a Diploma or Bachelor's Degree in Computer Science/IT/Applications, or a relevant certificate such as PGDCA/DOEACC 'A' Level. Higher qualifications are acceptable. Minimum competencies include one year of work experience in the same job role and good communication skills.
States/UTs should follow a standardized selection procedure (Written test, Interview, Practical test). Trainers should undergo induction training and annual in-service training. Appraisal based on pre-established criteria and objectives (e.g., teaching methods, student results, knowledge up-gradation, industry linkages, student support).