TRANSPAC Development Log





Vision

Help future generations use existing tools.

Mission

Webify TLOSS.

Owner

Roy Hyunjin Han

Context

TLOSS is a FORTRAN program written by David Brown that measures electricity loss on high voltage transmission lines.

Timeframe

20170908-1330 - 20170922-1330: 2 weeks estimated

20170908-1330 - 20171230-1700: 4 months actual

Objectives

+ Webify TLOSS
+ Turn input text into tables
+ Turn output text into properties

Log

20170908-1330 - 20170908-1400: 30 minutes

FLNAME
RECORD
_ NOPT
RHO
NINS

gfortran -o tloss tloss.f
./tloss ldt800.dat x.log 100 1

+ Convert TLOSS live input into command line arguments

20170908-1400 - 20170908-1430: 30 minutes

+ Wrap with subprocess in notebook
+ Create notebook wrapper for fortran script
+ Save work
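
The wrapper can be sketched as follows, assuming the same positional arguments as the `./tloss` invocation above (input file, log file, RHO, NINS); the actual notebook code may have differed:

```python
import subprocess

def build_tloss_command(input_path, log_path, rho, nins, executable='./tloss'):
    # TLOSS originally read these values interactively; here they are
    # passed as positional command line arguments instead
    return [executable, input_path, log_path, str(rho), str(nins)]

def run_tloss(*args, **kwargs):
    # Run the compiled binary and return whatever it prints
    result = subprocess.run(
        build_tloss_command(*args, **kwargs),
        capture_output=True, text=True, check=True)
    return result.stdout
```

For example, `run_tloss('ldt800.dat', 'x.log', 100, 1)` mirrors the command line shown above.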

20170908-1600 - 20170908-1630: 30 minutes

+ Make raw tloss tool via cc.ini
+ Deploy raw tloss tool

Here is the raw TLOSS FORTRAN program wrapped in a basic Python script and run on the example datasets provided in the original documentation:

https://crosscompute.com/t/FWibB29Z2WLHrlWIoycxzK5atBImzL8E

20170922-1100 - 20170922-1130: 30 minutes

The goal is to finish the first iteration of the TRANSPAC and PDPAC tools today. Let's first evaluate where we are.

+ Find where we have defined the TLOSS tool
+ Identify next steps

We'll have to postpone the direct notebook to tool conversion because of some complications with including extra code.

  1. Upgrade TLOSS tool.
  2. Convert remaining TRANSPAC and PDPAC tools.
  3. Deploy and send email to Alex.

Is this something that we can realistically finish today? Let's hope so. That will count as one iteration.

+ Ask Alex if there is associated documentation
+ Ask Alex which tools to convert into Python and in which order
_ Attempt to convert FORTRAN programs into Python

20170922-1145 - 20170922-1215: 30 minutes

The current strategy is that we will not touch the code for any of the modules and we will work entirely around it. Eventually, however, we should convert the code into Python, but I think we will reserve that for another contract.

For now, let's just focus on getting all the tools webified first. The idea is that we will convert the table into the format needed by this fortran program.

+ Review datasets
+ Examine source text

It looks like CSPAN.FOR is the only program that uses span length.

+ Find which tools use span length

Splitting the input into tables should be straightforward.

20170922-1515 - 20170922-1545: 30 minutes

We should have a table for the bundles and the shields. The CSPAN tool will also have a table for the spans. I think it is safe to assume that only one copy of the tool will be running at any given time.

+ Plan how to split it into tables

We need code that will convert a table into a format acceptable by the fortran program.

20170922-2030 - 20170922-2100: 30 minutes

20170928-1800 - 20170928-1830: 30 minutes

There is only one example that has all three tables, so let us use that one.

+ Decide which file to use as the example

The problem is that we have this number called the code. But the code actually only refers to three numerical quantities. Another option is that we make the code itself modifiable in some kind of lookup table. Perhaps that is best.
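
The lookup-table option could look like this, using the column names from conductors.csv that appear later in this log; `get_conductor_properties` is a hypothetical helper, not code from the notebook:

```python
from pandas import DataFrame

# Editable lookup table mapping each conductor code to its three
# numerical quantities; the two sample rows are taken from
# conductors.csv as shown later in this log
conductor_table = DataFrame([
    ('HD CU AWG 6 1', 795, 0.162, 2.39, 0.637),
    ('MISSING CONDUCTOR', 999, None, None, None),
], columns=[
    'Name', 'Code', 'Diameter (inches)',
    'Subconductor Resistance (ohms/mile)',
    'Reactance at 1 Foot Spacing (ohms/mile)'])

def get_conductor_properties(code):
    # Resolve a code into its three numerical quantities
    row = conductor_table.set_index('Code').loc[code]
    return (
        row['Diameter (inches)'],
        row['Subconductor Resistance (ohms/mile)'],
        row['Reactance at 1 Foot Spacing (ohms/mile)'])
```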

20171005-1815 - 20171005-1830: 15 minutes

20171102-1430 - 20171102-1500: 30 minutes

20171113-2215 - 20171113-2230: 15 minutes

+ See what we have right now
+ Check that it works
+ Make raw tloss tool via notebook
_ Work on converting each tool to Python

I don't think we should spend time converting the tools into Python.

+ Convert elec.dat into a table
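
One way to sketch that conversion is with pandas `read_fwf`, assuming elec.dat uses the same fixed column widths as the line template `'{:<20}{:<7}{:<10.4}{:<10.4}{:<17.4}'` used elsewhere in this log:

```python
from io import StringIO
from pandas import read_fwf

# Assumed fixed column widths, matching the line template
# '{:<20}{:<7}{:<10.4}{:<10.4}{:<17.4}' used to write these files
ELEC_WIDTHS = [20, 7, 10, 10, 17]
ELEC_COLUMNS = [
    'Name', 'Code', 'Diameter (inches)',
    'Subconductor Resistance (ohms/mile)',
    'Reactance at 1 Foot Spacing (ohms/mile)']

def load_elec_dat(source):
    # source can be a path or any file-like object
    return read_fwf(source, widths=ELEC_WIDTHS, names=ELEC_COLUMNS)

sample_line = 'HD CU AWG 6 1       795    0.162     2.39      0.637            '
table = load_elec_dat(StringIO(sample_line))
```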

20171114-0715 - 20171114-0730: 15 minutes

+ Update script to accept custom elec.dat path
+ Check that everything still works
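
Accepting a custom elec.dat path could be done with argparse along these lines; the flag name `--elec_dat_path` is an assumption, since the actual interface is not recorded in this log:

```python
import argparse

def parse_args(argv=None):
    # Hypothetical parser; the script's real argument names may differ
    parser = argparse.ArgumentParser(description='Run TLOSS on a dataset')
    parser.add_argument('input_path')
    parser.add_argument(
        '--elec_dat_path', default='elec.dat',
        help='path to the conductor properties file')
    return parser.parse_args(argv)

args = parse_args(['ldt800.dat', '--elec_dat_path', '/tmp/elec.dat'])
```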

20171114-0745 - 20171114-0800: 15 minutes

20171118-1715 - 20171118-1745: 30 minutes

20171121-1215 - 20171121-1245: 30 minutes

In [3]:
from pandas import read_csv
conductor_csv_table_path = '../Tools/conductors.csv'
conductor_csv_table = read_csv(conductor_csv_table_path).fillna('')
conductor_csv_table[-2:]
Out[3]:
     Name               Diameter (inches)  Subconductor Resistance (ohms/mile)  Reactance at 1 Foot Spacing (ohms/mile)  Code
138  HD CU AWG 6 1      0.162              2.39                                  0.637                                    795
139  MISSING CONDUCTOR                                                                                                    999
In [20]:
line_template = '{:<20}{:<7}{:<10.4}{:<10.4}{:<17.4}'
lines = []
for index, x in conductor_csv_table.iterrows():
    lines.append(line_template.format(
        x['Name'],
        x['Code'],
        x['Diameter (inches)'],
        x['Subconductor Resistance (ohms/mile)'],
        x['Reactance at 1 Foot Spacing (ohms/mile)']))
print('\n'.join(lines[-3:]))
HD CU AWG 4 1       790    0.204     1.503     0.609            
HD CU AWG 6 1       795    0.162     2.39      0.637            
MISSING CONDUCTOR   999                                         
In [5]:
from os.path import join
target_folder = '/tmp'
target_path = join(target_folder, 'conductors.dat')
open(target_path, 'wt').write('\n'.join(lines))
Out[5]:
9099
In [6]:
import numpy as np
from pandas import DataFrame
t = DataFrame([
    (1, np.nan, np.nan),
])
t
Out[6]:
   0    1    2
0  1  NaN  NaN
In [60]:
t.fillna('')
Out[60]:
   0  1  2
0  1
+ Write function to convert conductors.csv into elec.dat
+ Define save_conductors_dat
In [7]:
from pathlib import Path
In [12]:
p = Path('/tmp/xyz.csv')
p
Out[12]:
PosixPath('/tmp/xyz.csv')
In [14]:
open(str(p), 'wt').write('abcdef')
Out[14]:
6
In [16]:
p.open().read()
Out[16]:
'abcdef'
In [25]:
'{:<20}{:<7}{:<10.4}{:<10.4}{:<17.4}'.format(1, 2, '', '', '')
Out[25]:
'1                   2                                           '
+ Add table for elec.dat
+ Check that everything still works

20171230-1500 - 20171230-1700: 120 minutes

+ Split ldt345a.dat into three CSV tables
+ Convert example text file into tables
    + Make phase bundle table
    + Make shield bundle table
    + Make span elevation table
+ Write code to combine tables to text file for fortran script
+ Check that everything still works
+ Convert source text into tables
+ Convert target text into tables
+ Add text from comments into markdown
_ Webify each TRANSPAC tool
_ Webify each PDPAC tool
+ Draft webpage with links to each webified tool
_ Build tool that suggests optimizations and highlights savings
_ Make the interface for each tool more descriptive
_ Webify remaining TRANSPAC scripts by omitting the parts that require Graphoria
_ Webify CSPAN into multiple tools
_ Webify TLEFF if APPA can provide the example datasets
_ Webify PDPAC scripts if APPA can provide the example datasets    
+ Process tasks
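
The section-splitting step could be sketched as below, assuming the tables in the input file are separated by blank lines; the real ldt345a.dat layout may differ:

```python
def split_sections(text):
    # Group consecutive non-blank lines into sections; each section
    # can then be parsed into its own CSV table
    sections, current = [], []
    for line in text.splitlines():
        if line.strip():
            current.append(line)
        elif current:
            sections.append(current)
            current = []
    if current:
        sections.append(current)
    return sections
```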

Salah Ahmed created 17 tools from the TRANSPAC scripts. We did not webify the remaining TRANSPAC and PDPAC scripts.