Link to Video Tutorial -> https://www.youtube.com/watch?v=hDopw1mPlrU
Step 1 -> Create a separate calculated table as shown below
try:
    # Append the DataFrame to an existing workbook, replacing the sheet if it already exists
    with pd.ExcelWriter(file, engine='openpyxl', mode='a', if_sheet_exists='replace') as writer:
        partspec_insert.to_excel(writer, sheet_name='Existing Spec', index=False)
except Exception as e:
    # Handle the exception (e.g., log it or print it)
    print(f"An error occurred: {e}")

# Continue with the rest of your code
print("Execution continues even if an error occurred.")
# Execute query and fetch results into a DataFrame
df = pd.read_sql(sql_query, conn)

# Check if the DataFrame is empty
if df.empty:
    print("The result set is empty.")
else:
    print("The result set is not empty.")
    # Process the DataFrame (example)
    print(df)

# Close connection
conn.close()
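As a runnable illustration of the empty-result check above, here is a minimal sketch using an in-memory SQLite database; the `parts` table and its contents are hypothetical stand-ins for your actual connection and query:

```python
import sqlite3
import pandas as pd

# Hypothetical in-memory database standing in for the real connection
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (PartId TEXT, PartRev INTEGER)")
conn.execute("INSERT INTO parts VALUES ('A100', 1)")
conn.commit()

# One query that matches a row, and one that does not
df_hit = pd.read_sql("SELECT * FROM parts WHERE PartId = 'A100'", conn)
df_miss = pd.read_sql("SELECT * FROM parts WHERE PartId = 'ZZZ'", conn)

print(df_hit.empty)   # False: the result set has one row
print(df_miss.empty)  # True: no rows matched

conn.close()
```

The same `df.empty` check works regardless of which driver produced the DataFrame.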
from sqlalchemy import create_engine, text

# Define the connection string
connection_string = "mssql+pyodbc://your_username:your_password@your_server_name/your_database_name?driver=ODBC+Driver+17+for+SQL+Server"

# Create an engine
engine = create_engine(connection_string)

# Connect to the database
with engine.connect() as connection:
    # Define the SQL UPDATE query
    Part_Rev_inactivate = text("""
        UPDATE [Dbo].[PartRevDimension]
        SET PartStatus = 0
        WHERE PartId = :part_id
          AND PartRev = :part_rev
    """)

    # Define the parameters
    part_id = 'your_part_id_value'
    part_rev = 'your_part_rev_value'

    # Execute the query
    connection.execute(Part_Rev_inactivate, {'part_id': part_id, 'part_rev': part_rev})

    # Commit the transaction
    connection.commit()

print("Value updated successfully.")
Explanation:
- Replace your_server_name, your_database_name, your_username, and your_password with your actual SQL Server connection details.
- Replace your_part_id_value and your_part_rev_value with the actual values for part_id and part_rev.
- The text function is used to create a SQL text query.
- The parameters are passed as a dictionary to the execute method.
- The connection.execute method executes the update statement with the provided parameters.
- The connection.commit method commits the transaction, making the changes permanent.
- The with statement ensures that the connection is properly closed after the block of code is executed.

Make sure to handle exceptions and errors in a production environment to ensure the robustness of your code.
The error you're encountering is because Specresult is a CursorResult object, which cannot be directly concatenated with a pandas DataFrame. You need to convert the CursorResult to a pandas DataFrame first. Here's the corrected code:
from sqlalchemy import create_engine, text
import pandas as pd

Existing_Spec_Query = text("""
    SELECT *
    FROM [LSQD].[PartSpec]
    WHERE PartId = :part_id AND PartRev = :Part_Rev
""")

# Assuming 'engine' is already defined and connected to your database
with engine.connect() as connection:
    # Execute the query to fetch existing specifications
    Specresult = connection.execute(Existing_Spec_Query, {'part_id': part_id, 'Part_Rev': Part_Rev})

    # Fetch all rows from the result set and convert to DataFrame
    existing_Spec = pd.DataFrame(Specresult.fetchall(), columns=Specresult.keys())
    print(existing_Spec)
# End of Getting Existing Part data

# Concatenate the new specifications with the existing ones
partspec_insert = pd.concat([partspec_insert, existing_Spec], ignore_index=True)

# Update the 'PartRev' column to increment the revision number
partspec_insert['PartRev'] = part_rev[0] + 1
Explanation:
- Fetch Results: The Specresult.fetchall() method fetches all rows from the result set.
- Convert to DataFrame: pd.DataFrame(Specresult.fetchall(), columns=Specresult.keys()) converts the fetched rows into a pandas DataFrame.
- Concatenate DataFrames: pd.concat([partspec_insert, existing_Spec], ignore_index=True) concatenates the new specifications with the existing ones.
This should resolve the error and allow you to concatenate the dataframes successfully. If you have any further questions or need additional assistance, please let me know!