Quote:
Originally Posted by charybdis
I've been following this discussion a bit and I'd like to do one of the homogeneous Cunningham c177s with your parameters (and ~160 cores for sieving), but to give a bit of variety I'm thinking of doing it with A=28. Are there any other changes I ought to make to compensate for the smaller sieve region?

I think A=28 is optimal for this size, but that's really just a guess, so I'm happy to hear you'll try it!
Here's what I would change, and why:
The duplicate rate is often a bit higher when using a smaller siever, so you may need more than 270M relations. I estimate a matrix would build for Ed on I=15 with 250M, and I added 20M because he uses a farm to sieve but his "main" machine isn't very fast, so he is willing to sacrifice some sieve time to reduce matrix time. Our experience with the ggnfs sievers is that 10-15% more relations are needed on 14e vs 15e; since A=28 is halfway in between, we can guess 5-8% more relations will be needed. 8% more than 250M is 270M, so if you don't mind a longish matrix you could leave it at 270M. I would personally choose 285M for A=28 and see what happens.
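For what it's worth, the arithmetic behind those targets is just this (the 8% bump and the 15M cushion are my guesses, not measured duplicate rates):

```python
# Relations target for A=28, derived from the I=15 estimate above.
# The 8% bump and the 15M cushion are guesses, not measured values.
base_rels = 250_000_000   # relations estimated to build a matrix on I=15
extra_dup = 0.08          # extra relations guessed for A=28 vs I=15
cushion = 15_000_000      # optional margin if you want a smaller matrix

target = int(base_rels * (1 + extra_dup))
print(target // 10**6, "M")              # 270 M
print((target + cushion) // 10**6, "M")  # 285 M
```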
If yield isn't very good, you can relax the lambda settings a bit, say by 0.01 each. This will increase the relations required, though those complicated interactions between lambda/sieve speed/relations needed are why I do 8-10 factorizations at a given size before publishing parameters.
I would also increase each lim by 15% or so, say to lim0=105M and lim1=160M. I don't have a good reason for this, other than that ggnfs sievers see yield fall off markedly when Q > 2 * lim0. Even with CADO, I have found that choosing lim's such that Q sieved does not exceed lim1 is always faster than otherwise (where "always" is for all tests below 160 digits). I believe Ed's I=15 job should finish when Q is in the 100-130M range. Using A=28 will need roughly 50% more Q, so 150-190M as final Q. So I'm suggesting lim1 equal to my guess at final Q; note that since you're doing C177 rather than C175, you might add another 10% to both lim's, to e.g. 115M and 175M.
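As a quick sanity check on those numbers (the 15% and 10% bumps are rules of thumb from my testing, not derived values):

```python
# lim suggestions: +15% over rough I=15 values, then ~10% more for C177.
# All the percentages here are rules of thumb, not derived quantities.
lim0_a28 = 105_000_000   # lim0 after the ~15% bump for A=28
lim1_a28 = 160_000_000   # lim1 chosen near my guess at final Q

c177_bump = 1.10         # extra ~10% for C177 vs C175

def round_5m(x):
    """Round to the nearest 5M, since lim values needn't be precise."""
    return 5 * round(x / 5_000_000)

print(round_5m(lim0_a28 * c177_bump), "M")  # 115 M
print(round_5m(lim1_a28 * c177_bump), "M")  # 175 M
```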
Larger lim's improve yield (relations per Qrange) at the expense of a little speed.
Finally, 2 extra digits of difficulty makes the job about 25% harder, so I'd add 25% to poly select: change admax from 12e5 to 15e5.
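Putting the suggestions together, a params fragment might look something like the below. I'm writing key names as they appear in recent CADO-NFS params files, so double-check them against the params.c175 shipped with your version; I've left the lambdas out since you'd only relax them if yield disappoints:

```
# sketch only -- verify key names against your CADO version's params.c175
tasks.A = 28
# lims: +15% over rough I=15 values, then ~10% more for C177
tasks.lim0 = 115000000
tasks.lim1 = 175000000
tasks.sieve.rels_wanted = 285000000
# poly select: +25% for the two extra digits
tasks.polyselect.admax = 15e5
```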
If you'd like to contribute to refining these parameters going forward, I'd like to know the final Q sieved, the number of relations you generated (that is, the rels_wanted you chose), the number of unique relations, and the matrix size (total weight is a proxy for size, but it's nice to have both row count and total weight). Timing info is only useful if you plan to factor multiple numbers with your setup; obviously, if you do a second one of similar size, say within 3 digits, we can compare the timings and conclude which params were better.
Good luck!