Use named pipes for all of the Fortran program's input and output to avoid writing to disk. Then, in the consumer, you can use a thread to read from each of the program's output sources and add the messages to a queue for sequential processing.
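That fan-in pattern (one thread per source, all feeding a single shared queue, with a sentinel ending the loop) can be sketched in a few lines. This is a Python 3 illustration with made-up tags and sample lines, not part of the consumer below:

```python
import queue
import threading

q = queue.Queue()

def worker(tag, lines):
    # each output source gets its own thread; all feed one shared queue
    for line in lines:
        q.put((tag, line))

# fake sources standing in for stdout, stderr, etc.
sources = {'OUTPUT': ['2.0', '3.0'], 'ERRORS': ['bad input']}
threads = [threading.Thread(target=worker, args=item) for item in sources.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()
q.put(('QUIT', None))  # sentinel: tells the consumer loop to stop

# the main thread drains the queue sequentially
seen = []
while True:
    tag, line = q.get()
    if tag == 'QUIT':
        break
    seen.append((tag, line))
```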

To model this, I created a Python application, daemon.py, that reads from standard input and returns the square root until EOF. It logs all input to a log file specified as a command-line argument, prints the square roots to stdout, and prints all errors to stderr. I think it simulates your program (with only one output file, of course, but that can be scaled). You can view the source code for this test application here. Note the explicit calls to stdout.flush(). By default, standard output is block-buffered when redirected to a pipe, which means the output would be held back until the end and the messages would not arrive in order. I hope your Fortran application does not buffer its output. I believe my example application probably won't run on Windows, if only because of the use of select, which shouldn't matter in your case.
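The linked source isn't reproduced here, but a minimal sketch of such a daemon might look like the following. The function name process_stream and the error message format are my own; the key points from the description are the per-line logging and the explicit flush calls:

```python
import math
import sys

def process_stream(infile, outfile, errfile, logfile):
    """Read one value per line until EOF: log every input line,
    write its square root to outfile, and any error to errfile."""
    for line in infile:
        line = line.strip()
        logfile.write(line + '\n')
        logfile.flush()
        try:
            outfile.write('%f\n' % math.sqrt(float(line)))
        except ValueError as e:
            # catches both non-numeric input and negative numbers
            errfile.write('error for input %r: %s\n' % (line, e))
        # flush explicitly: stdout is block-buffered when piped
        outfile.flush()
        errfile.flush()

# run only when invoked with a log-file argument, as in the consumer below
if __name__ == '__main__' and len(sys.argv) > 1:
    with open(sys.argv[1], 'w') as log:
        process_stream(sys.stdin, sys.stdout, sys.stderr, log)
```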

I have a consumer application that starts the daemon application as a subprocess with stdin, stdout and stderr redirected to subprocess.PIPEs. Each of these pipes is given to a different thread: one for input, and three to handle the log file, errors, and standard output respectively. They all add their messages to a shared Queue, which the main thread reads from and hands to the parser.

Here is my consumer code:

```python
import os, random, time
import subprocess
import threading
import Queue
import atexit

def setup():
    # make a named pipe for every file the program should write
    logfilepipe = 'logpipe'
    os.mkfifo(logfilepipe)

def cleanup():
    # put your named pipes here to get cleaned up
    logfilepipe = 'logpipe'
    os.remove(logfilepipe)

# run our cleanup code no matter what - avoid leaving pipes laying around
# even if we terminate early with Ctrl-C
atexit.register(cleanup)

# My example iterator that supplies input for the program. You already have an iterator
# so don't worry about this. It just returns a random input from the sample_data list
# until the maximum number of iterations is reached.
class MyIter():
    sample_data = [0,1,2,4,9,-100,16,25,100,-8,'seven',10000,144,8,47,91,2.4,'^',56,18,77,94]
    def __init__(self, numiterations=1000):
        self.numiterations = numiterations
        self.current = 0
    def __iter__(self):
        return self
    def next(self):
        self.current += 1
        if self.current > self.numiterations:
            raise StopIteration
        else:
            return random.choice(self.__class__.sample_data)

# Your parse_func function - I just print it out with a [tag] showing its source.
def parse_func(source, line):
    print "[%s] %s" % (source, line)

# Generic function for sending standard input to the program.
# p - a process handle returned by subprocess
def input_func(p, queue):
    # run the command with output redirected
    for line in MyIter(30): # Limit for testing purposes
        time.sleep(0.1) # sleep a tiny bit
        p.stdin.write(str(line)+'\n')
        queue.put(('INPUT', line))
    p.stdin.close()
    p.wait()
    # Once our process has ended, tell the main thread to quit
    queue.put(('QUIT', True))

# Generic function for reading output from the program. source can either be a
# named pipe identified by a string, or subprocess.PIPE for stdout and stderr.
def read_output(source, queue, tag=None):
    print "Starting to read output for %r" % source
    if isinstance(source, str):
        # Is a file or named pipe, so open it
        source = open(source, 'r') # open file with string name
    line = source.readline()
    # enqueue and read lines until EOF
    while line != '':
        queue.put((tag, line.rstrip()))
        line = source.readline()

if __name__ == '__main__':
    cmd = 'daemon.py'
    # set up our FIFOs instead of using files - put file names into setup() and cleanup()
    setup()
    logfilepipe = 'logpipe'
    # Message queue for handling all output, whether it's stdout, stderr, or a file output by our command
    lq = Queue.Queue()
    # open the subprocess for command
    print "Running command."
    p = subprocess.Popen(['/path/to/'+cmd, logfilepipe],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # Start threads to handle the input and output
    threading.Thread(target=input_func, args=(p, lq)).start()
    threading.Thread(target=read_output, args=(p.stdout, lq, 'OUTPUT')).start()
    threading.Thread(target=read_output, args=(p.stderr, lq, 'ERRORS')).start()
    # open a thread to read any other output files (e.g. log file) as named pipes
    threading.Thread(target=read_output, args=(logfilepipe, lq, 'LOG')).start()
    # Now combine the results from our threads to do what you want
    run = True
    while run:
        (tag, line) = lq.get()
        if tag == 'QUIT':
            run = False
        else:
            parse_func(tag, line)
```

My iterator returns random input values (some of which are junk and cause errors). Yours would stand in for it. The program runs until the end of input, then waits for the subprocess to finish before enqueuing a QUIT message to the main thread. My parse_func is obviously very simple, just printing out the message and its source tag, but you should be able to process something. The function that reads from an output source is designed to work with both pipes and strings. Don't open the named pipes on the main thread, because they block until input is available. So for file readers (e.g. reading log files), it's better to have the child thread open the file and block. However, we spawn the subprocess on the main thread so that the handles for stdin, stdout and stderr can be passed to their respective child threads.
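The blocking behavior of open() on a named pipe, which is why the pipe readers live on child threads, can be demonstrated with a short POSIX-only sketch (the pipe path here is made up):

```python
import os
import tempfile
import threading
import time

path = os.path.join(tempfile.mkdtemp(), 'demopipe')
os.mkfifo(path)

events = []

def reader():
    # open() blocks here until some process opens the pipe for writing
    with open(path, 'r') as f:
        events.append(f.read().strip())

t = threading.Thread(target=reader)
t.start()
time.sleep(0.2)
# the reader is still stuck inside open(); nothing has been appended yet
assert events == []
with open(path, 'w') as f:  # opening the write end unblocks the reader
    f.write('hello\n')
t.join()
os.remove(path)
```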