Issue with Eric7 and Parallel Processing
Jamie Riotto
jamie.riotto at gmail.com
Wed Nov 15 15:33:17 GMT 2023
Ok, I've stripped it down to an almost-do-nothing parallel processing
script.

Eric6 - completes in 0.25 sec
Eric7 - never completes, although I did notice this time that upon
execution, Eric immediately showed the script stopped at a breakpoint on
its first instruction (import multiprocessing), with the messages
'<machine name>/14224/debug_client_mp-fork waiting at breakpoint' and
'MainThread waiting at breakpoint', even though I have no breakpoints
set. Pressing 'Continue (F6)' four times (I assume that's because I have
4 worker processes) moves the messages from 'waiting at breakpoint' to
'running', but the script still never terminates, nor do I see any
activity for those processes in Task Manager.
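In case it helps localize where things stall, below is a small diagnostic
variant of the worker setup. It is only a sketch: probeWorker is a made-up
name, and current_process(), get_context() and the 'spawn' start method are
plain standard-library multiprocessing, nothing eric-specific; I have not
verified that any of it changes eric7's behaviour. It prints which worker
picks up each job and makes the start method explicit:

import multiprocessing
from multiprocessing import current_process


def probeWorker(job_queue, output_queue):
    # report which worker process handles each job, so stalled workers are visible
    for data in iter(job_queue.get, 'STOP'):
        print(f'{current_process().name} got a job of {len(data)} items', flush=True)
        output_queue.put(sum(data))


if __name__ == '__main__':
    # 'spawn' is already the Windows default; making it explicit only rules out
    # any fork/spawn ambiguity in how the debug client follows child processes
    ctx = multiprocessing.get_context('spawn')
    job_queue = ctx.Queue()
    output_queue = ctx.Queue()
    for _ in range(4):
        job_queue.put(list(range(10)))
    for _ in range(2):
        job_queue.put('STOP')
    workers = [ctx.Process(target=probeWorker, args=(job_queue, output_queue))
               for _ in range(2)]
    for w in workers:
        w.start()
    results = [output_queue.get() for _ in range(4)]
    for w in workers:
        w.join()
    print('collected:', results)

If the worker names never print, the child processes presumably never get
past the debugger's implicit stop and never reach the target function.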
Hope this helps... - jamie
========================================================================
import multiprocessing
from time import time
import random


def parallelSim(job_queue, output_queue):
    for data in iter(job_queue.get, 'STOP'):
        choices = random.choices(data, k=10)
        total = 0
        for i, c in enumerate(choices):
            sign = 1 if i % 2 == 0 else -1
            total += c * c * sign
        output_queue.put(total)


if __name__ == '__main__':
    start_time = time()
    job_queue = multiprocessing.Queue()
    output_queue = multiprocessing.Queue()

    # create some data
    data = list(range(1, 1000))

    # DEBUG
    # numCPUs = multiprocessing.cpu_count()
    numCPUs = 4
    iterations = 10
    numjobs = numCPUs * iterations

    # load up the job queue
    for sim in range(numjobs):
        job_queue.put(data)

    # add STOPs to the job queue
    for x in range(numCPUs):
        job_queue.put('STOP')

    serialDebug = False
    if serialDebug is True:
        # debug the parallel process serially
        parallelSim(job_queue, output_queue)
    else:
        # parallelize processing using a pool of processes
        for i in range(numCPUs):
            multiprocessing.Process(target=parallelSim,
                                    args=(job_queue, output_queue)).start()

    results = []
    for r in range(numjobs):
        results.append(output_queue.get())

    avg_result = sum(results) / numjobs
    print("")
    print(f'Average Results = {avg_result}')

    end_time = time()
    elapsed = end_time - start_time
    print(f"Finished in: {elapsed:.3f} seconds")
On Wed, Nov 15, 2023 at 4:28 AM Detlev Offenbach <detlev at die-offenbachs.de>
wrote:
> Hi Jamie,
>
> would it be possible to create the sample script without the dependency
> on numpy? That would make the task on my side much easier.
>
> Regards,
> Detlev
>
> Am 14.11.23 um 18:50 schrieb Jamie Riotto:
> > Dear Eric Folks,
> > I have recently upgraded from:
> > - Python 3.9.5, 64-bit
> > - Qt 5.15.2
> > - PyQt 5.15.4
> > - QScintilla 2.12.0
> > - sip 6.1.0.dev2104271705
> > - eric6 19.12(rev.274baadc5686)
> >
> > to:
> > - Python 3.9.13, 64-bit
> > - Qt 6.6.0
> > - PyQt 6.6.0
> > - QScintilla 2.14.1
> > - sip 6.7.12
> > - eric7 23.112(rev.e075c8fe07fd)
> >
> > After porting to PyQt6 (mostly enum fixes), most of my applications are
> > running just fine. However, I have noticed big issues with parallel
> > processing apps. I've whittled down a Solitaire simulation to a bare
> > minimum to show the problem. It runs in about 1 sec (almost all of it
> > parallel queue setup) on Eric6 / PyQt5, but never terminates or prints
> > anything on Eric7 / PyQt6. Also, when I set the CPU count to the number
> > of cores (right now the code hardwires numCPUs = 1) and the iterations
> > to 30,000, the Eric6 process takes 105 seconds and pegs all 32 cores to
> > 100% almost immediately. On the other hand, the Eric7 process never
> > shows a utilization of more than 7 or 8%.
> >
> > Any help would be greatly appreciated, along with any suggestions of
> > what I could try next.
> >
> > Here is the code if it helps:
> >
> >
> --------------------------------------------------------------------------------------------------------
> > import random
> > import itertools
> > import multiprocessing
> > import numpy as np
> > from time import time
> >
> >
> > #=====================================================================
> > #
> > # Parallel Solitaire Simulator
> > #
> >
> > def parallelSim(jobs, sims):
> >     print(f'Sim # {sims}')
> >     for new_deck, iterations in iter(jobs.get, 'STOP'):
> >         results = np.zeros(52, 'i')
> >         for sim in range(iterations):
> >             deck = new_deck.copy()
> >             random.shuffle(deck)
> >             piles = []
> >             while len(deck):
> >                 piles.append([deck.pop()])
> >
> >             # start playing
> >             i = len(piles) - 1
> >             while i > 0:
> >                 # check next pile Suit and Number
> >                 if (piles[i-1][-1][0] == piles[i][-1][0] or
> >                         piles[i-1][-1][1] == piles[i][-1][1]):
> >
> >                     # adjacent pile has a match, card goes there
> >                     piles[i-1].extend(piles[i])
> >
> >                     # left shift all the remaining used piles
> >                     piles[i:] = piles[i+1:]
> >
> >                     # adjust index to shifted pile
> >                     i = len(piles) - 1
> >                     continue
> >                 i -= 1
> >
> >             results[len(piles)] += 1
> >
> >         sims.put(results)
> > #
> > #====================================================================
> >
> >
> > if __name__ == '__main__':
> >     # killZombies()  # helper not defined in this excerpt; commented out
> >     #                # so the stripped-down sample runs stand-alone
> >
> >     start_time = time()
> >
> >     # randomize the randomizer
> >     random.seed()
> >
> >     numCPUs = multiprocessing.cpu_count()
> >     numCPUs = 1
> >     jobs = multiprocessing.Queue()
> >     sims = multiprocessing.Queue()
> >
> >     # create a deck of cards
> >     new_deck = list(itertools.product('HCDS', range(1, 14)))
> >
> >     iterations = 10
> >     numJobs = numCPUs * iterations
> >     for sim in range(numCPUs):
> >         jobs.put([new_deck.copy(), iterations])
> >
> >     # add STOPs to the queues
> >     for x in range(numCPUs):
> >         jobs.put('STOP')
> >
> >     serialDebug = False
> >     if serialDebug is True:
> >         parallelSim(jobs, sims)
> >
> >     else:
> >         # parallelize processing using a pool of processes
> >         for i in range(numCPUs):
> >             multiprocessing.Process(target=parallelSim,
> >                                     args=(jobs, sims)).start()
> >
> >     results = np.zeros(52, 'i')
> >     for r in range(numCPUs):
> >         partial = sims.get()
> >         results += partial
> >
> >     print("")
> >     result_indices = np.nonzero(results)[0]
> >     total = 0
> >     for index in result_indices:
> >         print('{:d}:{:d}, '.format(index, results[index]), end="")
> >         total += results[index]
> >
> >     print("")
> >     print("Wins = ", results[1])
> >     print("Total = ", total)
> >     print("Win Ratio = ", results[1]/total)
> >     print("")
> >
> >     end_time = time()
> >     elapsed = end_time - start_time
> >     print(f"Finished in: {elapsed:.3f} seconds")
> >     print("Done")
> >
> --
> Detlev Offenbach
> detlev at die-offenbachs.de
>
>