Closed Bug 719491 Opened 12 years ago Closed 12 years ago

Add b2g to the list of builds

Categories

(Release Engineering :: General, defect, P3)

x86_64
Linux
defect

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: fabrice, Assigned: jhford)

References

Details

(Whiteboard: [project])

Attachments

(9 files, 3 obsolete files)

605 bytes, text/plain
Details
990 bytes, patch
bhearsum
: review+
Details | Diff | Splinter Review
779 bytes, patch
Details | Diff | Splinter Review
47.58 KB, text/plain
Details
29.09 KB, patch
bhearsum
: review+
jhford
: checked-in+
Details | Diff | Splinter Review
7.81 KB, patch
bhearsum
: review+
jhford
: checked-in+
Details | Diff | Splinter Review
1.87 KB, patch
rail
: review+
Details | Diff | Splinter Review
2.47 KB, patch
jhford
: review+
jhford
: checked-in+
Details | Diff | Splinter Review
876 bytes, patch
jhford
: review+
nthomas
: checked-in+
Details | Diff | Splinter Review
Currently we don't build b2g, and we have already had to fix regressions because of this.
So what would this entail? The 'make gecko' part of the B2G build (basically building with B2G's mozconfig and platform toolchain)?
A build with gonk widget enabled.
(In reply to Mounir Lamouri (:volkmar) (:mounir) from comment #2)
> A build with gonk widget enabled.

and --enable-b2g-ril plz
* Do these builds work on the tryserver yet? 
* Are there missing tools we (releng) need to install on the slaves before they will work?
* What branches would we be running these against? m-c? m-i?
Priority: -- → P3
Whiteboard: [project]
We need that on m-c, m-i (and try if possible).

Slaves need to have the gonk platform toolchain, which is slightly different from the standard android one we use for fennec. It's mostly the https://github.com/andreasgal/B2G/tree/master/glue/gonk repository.
I'd like to also point out that we do builds on each github commit over here: 
http://builder.boot2gecko.org/

This just gets the code, builds b2g from scratch and makes sure there are no errors. We also have a test slave machine that runs a small test on each of these builds (https://github.com/jonallengriffin/marionette_client/blob/master/marionette/tests/emulator/test_battery.py), but this test has been failing recently due to battery API changes. Unfortunately, the results of these tests can't be seen at the moment, since brasstacks (which hosts autolog) is down, but you can see the build status at the http://builder.boot2gecko.org/ site.
A wrinkle here is that the gecko-gonk toolchain relies on gonk libraries.  The set of libraries won't grow, but the gonk libraries (code outside of m-c) are also under development, so there may be times when the tree turns red until we can update the gonk "toolchain".

I'm fine with that as long as we can get a relatively quick turnaround on "toolchain" updates.  We'll know which gecko patches require them so can plan ahead.
Bug 723148 is another instance of the b2g build being broken because nothing shows up on tbpl.
When/how/with whom can we chat about moving this forward?
Depends on: 723148
Sorry, it's not really clear what you need here.

A few basic questions:

* why aren't the builds mdas referenced in comment 6 sufficient?

* where does the source live?

* how is it built? any special instructions?

* is it working on try right now?

* what dependencies are there, where do we get them, how should they be deployed?

* what tests get run? how?
(In reply to Chris AtLee [:catlee] from comment #10)
> Sorry, it's not really clear what you need here.
> 
> A few basic questions:
> 
> * why aren't the builds mdas referenced in comment 6 sufficient?
> 

They don't turn tryserver/inbound tbpl red.

> * where does the source live?
> 

mozilla-central

> * how is it built? any special instructions?
> 

As we explained above, basically you need a "b2g toolchain".  Then a b2g mozconfig.  Both live in https://github.com/andreasgal/b2g.  Do you need the full gory details here or is that for another forum?

> * is it working on try right now?
> 

No, that's the problem we want to solve.

> * what dependencies are there, where do we get them, how should they be
> deployed?
> 

B2G toolchain.  B2G repository.  |git pull && make| when we make changes that affect the toolchain (relatively rare).

> * what tests get run? how?

Running tests is a harder problem.  Let's punt.  Builds first.
Re: b2g "toolchain", it consists of

 * gcc compiler suite.  This is shipped prebuilt in the b2g repository.

 * Gonk "system libraries" that b2g links against.  These are built during the normal b2g build process.

The system libraries are required for b2g builds.  There are several models we can use to keep those up-to-date, depending on what's best for our infra.

What might be the easiest model is that you guys

 * git clone https://github.com/andreasgal/b2g somewhere

 * |make sync && make config-qemu && make gonk| to set up the system libs

 * deploy (???)

When we need an update of the "b2g toolchain" (i.e., compiler suite changes or system libs change), we can file a bug on updating infra.  The update process would look like

 * |make sync && make gonk|

 * deploy (???)

Obviously, the deploy part I know nothing about.
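
Pulling the two lists above together, a rough sketch of what this might look like end-to-end on a build machine (the checkout path below is just a placeholder, and the deploy step is still the open question):

# one-time setup -- /builds/b2g is a placeholder path
git clone https://github.com/andreasgal/b2g /builds/b2g
cd /builds/b2g
make sync && make config-qemu && make gonk

# later "toolchain" update (compiler suite or system libs changed)
cd /builds/b2g
make sync && make gonk
# deploy: ???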
Catlee any other information you guys need?
Depends on: 725312
The b2g build is broken once again (see bug 725312)

Can we try to make some progress, or have a plan with an ETA?
Depends on: 725433
How often do you expect the toolchain to change? Building, packaging and deploying the new toolchain will take around 48 hours each time. We also do not have git on the machines, so there's some extra upfront work to do there.

Which platforms should this be built on? 32-bit or 64-bit linux only?
Depends on: 725436
(In reply to Chris AtLee [:catlee] from comment #15)
> How often do you expect the toolchain to change? Building, packaging and
> deploying the new toolchain will take around 48 hours each time. We also do
> not have git on the machines, so there's some extra upfront work to do there.
> 

At most, I would expect every other week.

> Which platforms should this be built on? 32-bit or 64-bit linux only?

b2g builds on either (as long as the 64-bit linuces have 32-bit compat libs).
So, I tried cloning B2G, and running 'make sync' died after using up 1.9GB and running out of inodes on the machine.

Is there a better way to get the toolchain?
No.  There are more painful ways though.  Up to you.
So I think an alternate solution that's not so painful would be to add a |make toolchain| build target, possibly run on every full build on the existing builders.  That just involves zipping up some stuff, so should be pretty fast.

Then when we need toolchain updates, we can send a URL of a zip or whatever on the builders.  How does that sound?
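
For illustration, the packaging side of such a target could be little more than tarring up the prebuilt compiler plus the built system libs from an up-to-date b2g clone (the directory names below are placeholders, not the real layout):

# sketch only -- GONK_LIB_DIR stands in for wherever the built system libs land
cd /path/to/B2G
tar cjf gonk-toolchain-N.tar.bz2 prebuilt/ "$GONK_LIB_DIR"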
(In reply to Chris Jones [:cjones] [:warhammer] from comment #19)
> So I think an alternate solution that's not so painful would be to add a
> |make toolchain| build target, possibly run on every full build on the
> existing builders.  That just involves zipping up some stuff, so should be
> pretty fast.
> 
> Then when we need toolchain updates, we can send a URL of a zip or whatever
> on the builders.  How does that sound?

That's more how we do things for other items in "external" (non m-c) repos. I like that because then when you guys (b2g) have a toolchain change, you can stage your zip someplace, and then throw a checkin to try and have it build using your new zip.
That way you don't need releng to stage every time you change the toolchain.
I uploaded toolchain "0" to http://people.mozilla.com/~cjones/gonk-toolchain-0.tar.bz2 .  Attached is a mozconfig that assumes the toolchain is installed to /home/cjones/mozilla/.  Obviously that will need to change.

After that, building is pretty straightforward.  The one snag is that the android-gcc linker runs out of fds when linking libxul.so for b2g, in the default settings of most linux distros.  This is really annoying.  The "fix" is to raise the fd ulimit.  Here's the script I used to build with the attached mozconfig:
-----
ulimit -n 4096
export MOZCONFIG="${HOME}/mozilla/mozilla-central/gonk-config-debug"
make -f client.mk build
-----

We'll need to do something similar on our infra.
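
For example, one way to make the higher limit stick for the cltbld build user on the slaves would be an /etc/security/limits.conf entry (assuming pam_limits is in play there; the per-shell ulimit above works just as well):

# /etc/security/limits.conf -- raise the open-file limit for the build user
cltbld    soft    nofile    4096
cltbld    hard    nofile    4096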

The toolchain should work without any other dependencies on linux-x86 and linux-x86_64.  (Beyond what m-c requires.)

Since we're cross-compiling, we have the capability to build on darwin-x86, darwin-x86_64, windows, and windows-x86_64.  We could use that to distribute load.  However, I don't think our infra supports this, and it might be more trouble than it's worth.  But something to keep in mind.

Support for building the toolchain is upstream in b2g: |make config-qemu && make package-toolchain| spits it out.  So it's quite easy for anyone to update the toolchain, from wherever there's an up-to-date b2g clone.

Let me know if I can help more.  We really really really want to get this hooked up asap :).
Addendum: I compressed with bz2 because it ended up doing the best job.  It's trivial to switch to gzip or zip if we don't have bz2 support wherever this package needs to go.
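
For example, repacking the existing tarball as gzip is a one-liner:

bunzip2 -c gonk-toolchain-0.tar.bz2 | gzip -9 > gonk-toolchain-0.tar.gz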
Please let us know what the next steps are here.
Assignee: nobody → jhford
(In reply to Chris Jones [:cjones] [:warhammer] from comment #23)
> Please let us know what the next steps are here.

the next steps here are:
- bug 727123, setting up a way to allow easy toolchain deployments to slaves
- test toolchain from comment 21 on an existing linux slave
- modify buildbot-configs and buildbotcustom to add the builders to the relevant branches in this bug
- file a bug to have b2g support added to TBPL

For now, I'd like to stick with doing builds on linux for expedience's sake, but it is very interesting that there are no specific Linux dependencies.

I do have a couple of questions,

1) is the toolchain relocatable?  If it isn't, can you please rebuild using the equivalent of autoconf's --prefix=/tools/dev-toolchains/gonk/0/ ?  

2) do you need anything uploaded?  I plan on doing ftp.m.o logs, but are there build artifacts that are useful for you?
(In reply to John Ford [:jhford] from comment #24)
> (In reply to Chris Jones [:cjones] [:warhammer] from comment #23)
> 1) is the toolchain relocatable?  If it isn't, can you please rebuild using
> the equivalent of autoconf's --prefix=/tools/dev-toolchains/gonk/0/ ?  
> 

Yes, it's relocatable.

> 2) do you need anything uploaded?  I plan on doing ftp.m.o logs, but are
> there build artifacts that are useful for you?

No, just build logs are fine for now.
Depends on: 730968
Looks like the gonk toolchain requires a newer glibc, and the gold linker also requires a newer libstdc++.  Our systems have glibc-2.5.12 and libstdc++-4.1.1.

Chris, would you be able to rebuild the toolchain as static binaries?
How does this work for android builds?  We're using essentially the same stuff.
(In reply to Chris Jones [:cjones] [:warhammer] from comment #27)
> How does this work for android builds?  We're using essentially the same
> stuff.

maybe it doesn't require newer glibc/libstdc++, but the binaries in your tarball are definitely linked against newer versions.  I'll try building the b2g toolchain using your instructions in comment 12.


[cltbld@mv-moz2-linux-ix-slave01 bin]$ pwd
/tools/android-ndk/build/prebuilt/linux-x86/arm-eabi-4.4.0/bin
[cltbld@mv-moz2-linux-ix-slave01 bin]$ strings * | grep GLIBC | sort -u
GLIBC_2.0
GLIBC_2.1
GLIBC_2.1.3
GLIBC_2.2
GLIBC_2.2.3
GLIBC_2.3
GLIBC_2.3.4

[cltbld@mv-moz2-linux-ix-slave01 bin]$ pwd
/builds/gonk-test/gonk-toolchain-0/prebuilt/linux-x86/toolchain/arm-eabi-4.4.3/bin
[cltbld@mv-moz2-linux-ix-slave01 bin]$ strings * | grep GLIBC | sort -u
GLIBC_2.0
GLIBC_2.1
GLIBC_2.11
GLIBC_2.1.3
GLIBC_2.2
GLIBC_2.2.3
GLIBC_2.3
GLIBC_2.3.3
GLIBC_2.3.4
GLIBC_2.4
GLIBC_2.7
GLIBC_2.8
GLIBCXX_3.4
GLIBCXX_3.4.10
GLIBCXX_3.4.11
I can't pull the toolchain repository.  I tried on my laptop and a build slave; below is the build slave's response:

[cltbld@mv-moz2-linux-ix-slave01 gonk-test]$ GIT_SSL_NO_VERIFY=true /tools/git-1.7.8.2/bin/git clone https://github.com/andreasgal/b2g
Cloning into 'b2g'...
fatal: https://github.com/andreasgal/b2g/info/refs not found: did you run git update-server-info on the server?
try cloning with |hg clone https://github.com/andreasgal/B2G.git|
(In reply to Fabrice Desré [:fabrice] from comment #30)
> try cloning with |hg clone https://github.com/andreasgal/B2G.git|

that didn't work either, but git://github.com/andreasgal/B2G.git did.
Depends on: 731656
Trying to build the gonk toolchain didn't work, because the prebuilt toolchain used to build it is the same copy of the gcc toolchain that is in the tarball from comment 21.

I am going to look at solving this with mock, basically creating a complete chroot of a system that has modern dependencies.  To speed things up, a list of tools and libraries that I need to run the builds would be great.
Depends on: 731885
I hit this error while building under mock.  It looks like breakpad is trying to build dump_syms as a build host binary.  I doubt that this tool is useful for b2g.  I've added

ac_add_options --disable-crashreporter

to my mozconfig file.  I've restarted the build.

c++ -o dump_syms   -static host_dump_syms.o ../../../../../../../toolkit/crashreporter/google-breakpad/src/common/linux/libhost_breakpad_linux_common_s.a ../../../../../../../toolkit/crashreporter/google-breakpad/src/common/libhost_breakpad_common_s.a ../../../../../../../toolkit/crashreporter/google-breakpad/src/common/dwarf/libhost_breakpad_dwarf_s.a  
/usr/bin/ld: cannot find -lstdc++
/usr/bin/ld: cannot find -lm
/usr/bin/ld: cannot find -lc
collect2: ld returned 1 exit status
Good news!  I was able to build b2g inside of a mock chroot.  I did need to raise the file descriptor limit to 4096 as suggested to get it to link.  The commands I ran to do the build were:


mock_mozilla -r mozilla-f16-i386 --init
(cd /builds/targets/mozilla-f16-i386/builds/ && hg clone http://hg.mozilla.org/mozilla-central && cd mozilla-central && wget http://people.mozilla.org/~cjones/gonk-toolchain-0.tar.bz2 && tar jxf gonk-toolchain-0.tar.bz2 && wget -O gonk-debug-mozconfig https://bug719491.bugzilla.mozilla.org/attachment.cgi?id=596002)
mock_mozilla -r mozilla-f16-i386 --install python autoconf213 zip
mock_mozilla -r mozilla-f16-i386 --cwd /builds/mozilla-central --shell '/bin/bash -c "ulimit -n 4096 ; /usr/bin/env MOZCONFIG=gonk-debug-mozconfig make -f client.mk build"'

The last command is pretty ugly because of how the mock_mozilla.py script is written right now.  Fixing that does not block deploying b2g.  The packaging scripts ran successfully but, as stated above, the packages don't need to be uploaded yet.

I tested this using a CentOS 6.2 build host using a Fedora 16 build environment.  The build environment was completely empty aside from a couple tools that were explicitly installed and a couple very basic packages, essentially enough to run a simple shell.  I am going to test this on an IX machine as well as a DL120.

The next steps here are:
1) deploy mock -- bug 731885
2) modify buildbot to use mock to build b2g (bug 727123 for toolchain deployment, need to file changes to consume toolchain)
3) ???
4) profit
Depends on: 732248
Depends on: 732291
Depends on: 733099
Depends on: 735305
Depends on: 735954
Cribbed from mobile
Attachment #606285 - Flags: review?(bhearsum)
Attachment #606285 - Flags: review?(bhearsum) → review+
(In reply to Philipp von Weitershausen [:philikon] from comment #3)
> (In reply to Mounir Lamouri (:volkmar) (:mounir) from comment #2)
> > A build with gonk widget enabled.
> 
> and --enable-b2g-ril plz

Phil, sorry I missed this.  Please land whatever changes you think should be made to http://hg.mozilla.org/mozilla-central/raw-file/default/b2g/config/mozconfigs/linux32/debug
--enable-b2g-ril only applies to desktop builds.  This bug covers the b2g builds.  Don't worry about those for now.
Chris, I am finding that my staging build died with

make[7]: Entering directory `/builds/slave/m-cen-b2g/build/objdir-prof-gonk/toolkit/crashreporter/google-breakpad/src/tools/linux/dump_syms'
dump_syms.cc
/usr/bin/ld: cannot find -lstdc++
/usr/bin/ld: cannot find -lm
/usr/bin/ld: cannot find -lc

It is trying to compile dump_syms, which is compiled for the host arch.  Because the mock environment has only the minimal subset of programs and libraries required to build, there isn't a fully functional host toolchain in the build environment.

I've found that adding 

ac_add_options --disable-crashreporter

means that dump_syms no longer builds.  Is this change OK?  We can install the required libraries to build dump_syms if you'd prefer.

I have also made two other small changes.  Regardless of the outcome of the crashreporter question, I'd like to land these changes if it's ok with you.
Attachment #606415 - Flags: review?(jones.chris.g)
Comment on attachment 606415 [details] [diff] [review]
disable-crashreporter

>diff --git a/b2g/config/mozconfigs/linux32/debug b/b2g/config/mozconfigs/linux32/debug

>+mk_add_options MOZ_MAKE_FLAGS="-j8"
> 

Do you really want to turn "-s" off?  If this is going to a log, it's
going to create 10s of MBs of mostly useless spew in the worst case.
If this is consistent with the other mozconfigs, OK.

>+ac_add_options --disable-crashreporter

I prefer installing the required libs to keep this on.  The
crashreporter is some of the ickiest and most delicate code in m-c, so
it's something we don't want to be not-compiling in m-c for a long
time and then have to fix in a rush.
Attachment #606415 - Flags: review?(jones.chris.g)
(In reply to Chris Jones [:cjones] [:warhammer] from comment #39)
> Comment on attachment 606415 [details] [diff] [review]
> disable-crashreporter
> 
> >diff --git a/b2g/config/mozconfigs/linux32/debug b/b2g/config/mozconfigs/linux32/debug
> 
> >+mk_add_options MOZ_MAKE_FLAGS="-j8"
> > 
> 
> Do you really want to turn "-s" off?  If this is going to a log, it's
> going to create 10s of MBs of mostly useless spew in the worst case.
> If this is consistent with the other mozconfigs, OK.

Yep, this is consistent with all of our other logs.  The extra info is often critical for debugging failures without having to rerun the entire build.  I'm going to land this and the objdir change.

> >+ac_add_options --disable-crashreporter
> 
> I prefer installing the required libs to keep this on.  The
> crashreporter is some of the ickiest and most delicate code in m-c, so
> it's something we don't want to be not-compiling in m-c for a long
> time and then have to fix in a rush.

Ok!  I'll install those libraries and get it working.
Alrighty, r=me on the first two changes.
Ok, looks like all I was missing was the static glibc and libstdc++ libraries.  My test build died, but that was during linking and was because I hadn't yet set the max fd limit to 4096.  I've now set the FD limit to 4096 and am starting a new test build.
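
For the record, inside the mock environment that boils down to something like the following (package names here are the standard Fedora ones and are an assumption about what was actually installed):

# static C/C++ runtime libs so host tools like dump_syms can link with -static
mock_mozilla -r mozilla-f16-i386 --install glibc-static libstdc++-static

The fd limit itself is handled by the ulimit that's already part of the mock --shell command quoted earlier in this bug.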
Attached patch buildbot-configs v1 (obsolete) — Splinter Review
These are the buildbot-configs patches.
Attachment #606697 - Flags: review?(catlee)
Attached patch buildbotcustom v1 (obsolete) — Splinter Review
Comment on attachment 606699 [details] [diff] [review]
buildbotcustom v1

Silly bugzilla, posting before I intended to submit.
Attachment #606699 - Attachment is obsolete: true
Attachment #606699 - Attachment is patch: true
Attached patch buildbotcustom v2 (obsolete) — Splinter Review
These are the buildbotcustom changes needed.  I was able to rework a lot of the scratchbox tools to support building with mock.  I have intermediate patches in an ugly git repository that I could give you if you'd like to see that.
Attachment #606706 - Flags: review?(catlee)
git log -p ftw!
Comment on attachment 606697 [details] [diff] [review]
buildbot-configs v1

Review of attachment 606697 [details] [diff] [review]:
-----------------------------------------------------------------

::: mozilla/config.py
@@ +80,4 @@
>      'enable_blocklist_update': False,
>      'blocklist_update_on_closed_tree': False,
>      'enable_nightly': True,
> +    'enabled_products': ['firefox', 'mobile', 'b2g'],

Does this enable b2g for all branches? If so, is that what we want?
Comment on attachment 606706 [details] [diff] [review]
buildbotcustom v2

Review of attachment 606706 [details] [diff] [review]:
-----------------------------------------------------------------

r=me with the uploadPackages change.

::: process/factory.py
@@ +755,5 @@
> +                 tooltool_manifest_src=None,
> +                 tooltool_bootstrap="setup.sh",
> +                 tooltool_url_list=[],
> +                 tooltool_script='/tools/tooltool.py',
> +                 run_upload=True,

As mentioned on IRC, please use the existing uploadPackages instead of this. It currently controls packaging as well as uploading so you'll have to split that out.
Attachment #606706 - Flags: review?(catlee) → review-
(In reply to Ben Hearsum [:bhearsum] from comment #48)
> Does this enable b2g for all branches? If so, is that what we want?

Yes.  There is a dedicated pool of 35+ fast machines that will be running B2G builds for now, with very little stage activity, so this won't negatively impact our other builds.

(In reply to Ben Hearsum [:bhearsum] from comment #49)
> As mentioned on IRC, please use the existing uploadPackages instead of this.
> It currently controls packaging as well as uploading so you'll have to split
> that out.

Yep, I'll post a new patch once I've done this.
Attachment #606697 - Flags: review?(catlee)
Split upload and packaging logic into two functions.  Multilocale is still done in the upload section because it *has* to be done between make upload calls, and I don't want to add a bunch of extra complexity around how that is handled.
Attachment #606706 - Attachment is obsolete: true
Attachment #607692 - Flags: review?(bhearsum)
changes needed to make buildbotcustom-v3 work
Attachment #606697 - Attachment is obsolete: true
Attachment #607694 - Flags: review?(bhearsum)
Attachment #607694 - Attachment is patch: true
Attachment #607694 - Flags: review?(bhearsum) → review+
Comment on attachment 607692 [details] [diff] [review]
buildbotcustom v3

Review of attachment 607692 [details] [diff] [review]:
-----------------------------------------------------------------

::: misc.py
@@ +976,5 @@
>              # Platform already has the -debug suffix
>              unittestBranch = "%s-%s-unittest" % (name, platform)
>              tinderboxBuildsDir = "%s-%s" % (name, platform)
> +        elif 'b2g' in platform:
> +            uploadPackages = False

Is there a reason you've hardcoded this instead of looking it up from the config, which has an uploadPackages entry?
(In reply to Ben Hearsum [:bhearsum] from comment #53)
> Comment on attachment 607692 [details] [diff] [review]
> buildbotcustom v3
> 
> Review of attachment 607692 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: misc.py
> @@ +976,5 @@
> >              # Platform already has the -debug suffix
> >              unittestBranch = "%s-%s-unittest" % (name, platform)
> >              tinderboxBuildsDir = "%s-%s" % (name, platform)
> > +        elif 'b2g' in platform:
> > +            uploadPackages = False
> 
> Is there a reason you've hardcoded this instead of looking it up from the
> config, which has an uploadPackages entry?

Because this value is hardcoded a couple lines above the context with

uploadPackages = True

and uploadPackages is never actually read.  I guess I could do

> > +        elif 'b2g' in platform:
> > +            uploadPackages = pf.get('uploadPackages', False)

I've made this change to my local working copy

diff --git a/misc.py b/misc.py
--- a/misc.py
+++ b/misc.py
@@ -976,6 +976,8 @@ def generateBranchObjects(config, name, 
             # Platform already has the -debug suffix
             unittestBranch = "%s-%s-unittest" % (name, platform)
             tinderboxBuildsDir = "%s-%s" % (name, platform)
+        elif 'b2g' in platform:
+            uploadPackages = pf.get('uploadPackages', False)
         else:
             if pf.get('enable_opt_unittests'):
                 packageTests=True
(In reply to John Ford [:jhford] from comment #54)
> I've made this change to my local working copy
> 
> diff --git a/misc.py b/misc.py
> --- a/misc.py
> +++ b/misc.py
> @@ -976,6 +976,8 @@ def generateBranchObjects(config, name, 
>              # Platform already has the -debug suffix
>              unittestBranch = "%s-%s-unittest" % (name, platform)
>              tinderboxBuildsDir = "%s-%s" % (name, platform)
> +        elif 'b2g' in platform:
> +            uploadPackages = pf.get('uploadPackages', False)
>          else:
>              if pf.get('enable_opt_unittests'):
>                  packageTests=True

r=me with this change.
Comment on attachment 607692 [details] [diff] [review]
buildbotcustom v3

Review of attachment 607692 [details] [diff] [review]:
-----------------------------------------------------------------

Please take note of comment #55 when landing.
Attachment #607692 - Flags: review?(bhearsum) → review+
Depends on: 738302
Could someone please summarize for me what's left to do and who is doing it?
(In reply to Bob Moss :bmoss from comment #57)
> Could someone please summarize for me what's left to do and who is doing it?

Bob, the only thing left to do here is land my patches and reconfigure the buildbot masters.  I am doing both of these tasks and I am planning on doing them first thing Monday morning.
Excellent, and Thanks.
Depends on: 739392
Two issues prevented me from landing the changes earlier today.

The first issue was that the options needed to build the b2g gecko component changed, and those changes weren't landed in the checked-in mozconfig in the tree.  I worked with the b2g team to get these changes landed (bug 739392).  This was compounded by changes that broke the build even with the updated mozconfig file.

The second issue was that changes landed on buildbotcustom contained a bug.  This issue was fixed [1], and I filed a bug (bug 739486) so that the already-written tests that catch this issue are run in our preproduction test environment.

I am going to land these changes tonight and do the reconfig tomorrow.


[1] http://hg.mozilla.org/build/buildbotcustom/rev/1e0277f9acb8
deploy a BuildSlaves.py file that contains the appropriate BuildSlave passwords
Attachment #609572 - Flags: review?(rail)
Attachment #609572 - Flags: review?(rail) → review+
Attached patch fix testsSplinter Review
This fixes preproduction mozilla_config_tests (trial test/test_slave_allocation.py), which assumes that all production slaves should be listed as staging/preproduction as well to simplify slave movements.

Feel free to land anytime.
Attachment #609698 - Flags: review?(jhford)
This was deployed during a reconfig today.  Please file any issues with these builds as new bugs.
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Depends on: 739736
puppet on preproduction-stage is complaining:
puppetd[2601]: (//Node[stage-and-aus2-server]/stagelayout/File[/builds/data/ftp/pub/b2g/tinderbox-builds]/ensure) change from absent to directory failed: Cannot create /builds/data/ftp/pub/b2g/tinderbox-builds; parent directory /builds/data/ftp/pub/b2g does not exist
Attachment #609981 - Flags: review?(jhford)
Attachment #609981 - Flags: review?(jhford) → review+
Comment on attachment 609981 [details] [diff] [review]
[puppet-manifests] create b2g dir too

http://hg.mozilla.org/build/puppet-manifests/rev/f6169e4fe0cb

deployed & preproduction-master's command queue is cleaned up.
Attachment #609981 - Flags: checked-in+
Product: mozilla.org → Release Engineering