13 changes: 11 additions & 2 deletions .ci/unit-tests/puppet-chef.sh
@@ -10,11 +10,20 @@ if [ $PROVIDER = "chef" ]; then
# Re-enable chef tests by deleting this if block once the tests are fixed.
echo "Skipping tests... See issue #236"
elif [ -z "$EXCLUDE_PATTERN" ]; then
DISABLE_COVERAGE=true bundle exec parallel_rspec spec/
# The unit tests are all under e.g. spec/gcompute* - the integration tests
# are better run using `bundle exec rake spec` instead of
# `bundle exec parallel_rspec`, but if you just let it run the default
# set of specs, parallel_rspec will try to run the integration tests
# in addition. This is just running all the tests that aren't excluded -
# or, in the event of an empty exclude list, all the tests with the right
# names.
if ls spec/g$PRODUCT* > /dev/null 2>&1; then
DISABLE_COVERAGE=true bundle exec parallel_rspec spec/g$PRODUCT*
fi
else
# parallel_rspec doesn't support --exclude_pattern
IFS="," read -ra excluded <<< "$EXCLUDE_PATTERN"
filtered=$(find spec -name '*_spec.rb' $(printf "! -wholename %s " ${excluded[@]}))
filtered=$(find spec -name "g$PRODUCT*_spec.rb" $(printf "! -wholename %s " ${excluded[@]}))
DISABLE_COVERAGE=true bundle exec parallel_rspec ${filtered[@]}
fi
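The `printf` expansion in the exclude branch is easy to misread; here is a standalone sketch (with hypothetical spec paths) of how the comma-separated exclude list becomes a set of `find` filters:

```shell
#!/usr/bin/env bash
# Hypothetical exclude list, mirroring how the script consumes EXCLUDE_PATTERN.
EXCLUDE_PATTERN="spec/a_spec.rb,spec/b_spec.rb"

# Split the comma-separated list into an array.
IFS="," read -ra excluded <<< "$EXCLUDE_PATTERN"

# printf repeats its format string once per argument, so this expands to
# "! -wholename spec/a_spec.rb ! -wholename spec/b_spec.rb " -
# exactly the negated path filters that find needs.
args=$(printf "! -wholename %s " "${excluded[@]}")
echo "$args"
```

Passing `$args` unquoted to `find` (as the script does) relies on word splitting, which is safe here only because spec paths contain no whitespace.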

16 changes: 16 additions & 0 deletions provider/puppet.rb
@@ -190,6 +190,22 @@ def compile_examples(output_folder)
"products/#{@api.prefix[1..-1]}/examples/puppet/#{file}"]
end
)

# Compile the spec tests which depend on the examples.
# If a create and delete example both exist, generate
# the spec file. Select all the objects which have
# both create and delete examples, named the standard way.
@api.objects.each do |obj|
ex = @config.examples[obj.name]
next if ex.nil?
o = Google::StringUtils.underscore(obj.name)
# We can only generate the spec file if both examples exist.
next unless ex.include?("#{o}.pp") && ex.include?("delete_#{o}.pp")
compile_file_list(output_folder,
[["spec/integration/#{o}/integration_spec.rb",
'templates/puppet/integration_spec.rb']],
obj: obj)
end
end
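The selection logic above amounts to a filter over the examples map. A minimal sketch with hypothetical data — `underscore` here imitates `Google::StringUtils.underscore`, and the hash stands in for `@config.examples`:

```ruby
# Stand-in for @config.examples: object name => example files.
examples = {
  'BackendService' => ['backend_service.pp', 'delete_backend_service.pp'],
  'Disk'           => ['disk.pp'] # no delete example, so no spec is generated
}

# Imitation of Google::StringUtils.underscore.
def underscore(name)
  name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
end

# Only objects with both a create and a delete example get a spec file.
testable = examples.select do |name, files|
  o = underscore(name)
  files.include?("#{o}.pp") && files.include?("delete_#{o}.pp")
end

p testable.keys # => ["BackendService"]
```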

def compile_end2end_tests(output_folder)
2 changes: 2 additions & 0 deletions provider/puppet/common~compile~before.yaml
@@ -17,6 +17,7 @@
'.gitignore': 'templates/dot~gitignore'
'.rubocop.yml': 'templates/dot~rubocop~root.yml'
'Gemfile': 'templates/puppet/Gemfile'
'.fixtures.yml': 'templates/puppet/fixtures.yml.erb'
'README.md': 'templates/puppet/README.md.erb'
'metadata.json': 'templates/puppet/metadata.json.erb'
'spec/.rubocop.yml': 'templates/dot~rubocop~spec.yml'
@@ -25,3 +26,4 @@
'spec/puppetlint_spec.rb': 'templates/puppet/puppetlint_spec.rb.erb'
'spec/spec_helper.rb': 'templates/puppet/spec_helper.rb.erb'
'spec/test_constants.rb': 'templates/test_constants.rb.erb'
'spec/fixtures/mock_hc_fn.pp': 'templates/puppet/mock_healthcheck_fn.pp'
1 change: 1 addition & 0 deletions provider/puppet/common~copy.yaml
@@ -26,6 +26,7 @@ end
<% end # unless config.nil? -%>
'LICENSE': 'templates/LICENSE'
'Gemfile.lock': 'templates/puppet/Gemfile.lock'
'Rakefile': 'templates/puppet/Rakefile'
'lib/google/hash_utils.rb': 'google/hash_utils.rb'
'lib/google/string_utils.rb': 'google/string_utils.rb'
'spec/copyright.rb': 'spec/copyright.rb'
7 changes: 6 additions & 1 deletion templates/puppet/Gemfile
@@ -18,7 +18,10 @@

source 'https://rubygems.org'
group :test do
gem 'google-api-client'
gem 'googleauth'
gem 'metadata-json-lint'
gem 'parallel_tests'
gem 'puppet', ENV['PUPPET_GEM_VERSION'] || '>= 4.2.0'
gem 'puppet-lint'
gem 'puppet-lint-unquoted_string-check'
@@ -28,7 +31,9 @@ group :test do
gem 'rspec'
gem 'rspec-mocks'
gem 'rspec-puppet'
gem 'parallel_tests'
gem 'rubocop'
gem 'semantic_puppet'
gem 'simplecov'
gem 'vcr'
gem 'webmock'
end
54 changes: 54 additions & 0 deletions templates/puppet/Gemfile.lock
@@ -4,9 +4,15 @@ GEM
addressable (2.5.2)
public_suffix (>= 2.0.2, < 4.0)
ast (2.4.0)
crack (0.4.3)
safe_yaml (~> 1.0.0)
declarative (0.0.10)
declarative-option (0.1.0)
diff-lcs (1.3)
docile (1.1.5)
facter (2.5.1)
faraday (0.15.2)
multipart-post (>= 1.2, < 3)
fast_gettext (1.1.2)
gettext (3.2.6)
locale (>= 2.0.5)
@@ -15,17 +21,42 @@ GEM
fast_gettext (~> 1.1.0)
gettext (>= 3.0.2)
locale
google-api-client (0.23.8)
addressable (~> 2.5, >= 2.5.1)
googleauth (>= 0.5, < 0.7.0)
httpclient (>= 2.8.1, < 3.0)
mime-types (~> 3.0)
representable (~> 3.0)
retriable (>= 2.0, < 4.0)
signet (~> 0.9)
googleauth (0.6.6)
faraday (~> 0.12)
jwt (>= 1.4, < 3.0)
memoist (~> 0.12)
multi_json (~> 1.11)
os (>= 0.9, < 2.0)
signet (~> 0.7)
hashdiff (0.3.7)
hiera (3.4.2)
httpclient (2.8.3)
json (2.1.0)
json-schema (2.8.0)
addressable (>= 2.4)
jwt (2.1.0)
locale (2.1.2)
memoist (0.16.0)
metaclass (0.0.4)
metadata-json-lint (2.0.2)
json-schema (~> 2.8)
spdx-licenses (~> 1.0)
mime-types (3.2.2)
mime-types-data (~> 3.2015)
mime-types-data (3.2018.0812)
mocha (1.3.0)
metaclass (~> 0.0.1)
multi_json (1.13.1)
multipart-post (2.0.0)
os (1.0.0)
parallel (1.12.1)
parallel_tests (2.21.3)
parallel
@@ -50,6 +81,11 @@ GEM
rspec-puppet (~> 2.0)
rainbow (3.0.0)
rake (10.5.0)
representable (3.0.4)
declarative (< 0.1.0)
declarative-option (< 0.2.0)
uber (< 0.2.0)
retriable (3.1.2)
rspec (3.7.0)
rspec-core (~> 3.7.0)
rspec-expectations (~> 3.7.0)
@@ -73,19 +109,34 @@ GEM
ruby-progressbar (~> 1.7)
unicode-display_width (~> 1.0, >= 1.0.1)
ruby-progressbar (1.9.0)
safe_yaml (1.0.4)
semantic_puppet (1.0.2)
signet (0.9.1)
addressable (~> 2.3)
faraday (~> 0.9)
jwt (>= 1.5, < 3.0)
multi_json (~> 1.10)
simplecov (0.15.1)
docile (~> 1.1.0)
json (>= 1.8, < 3)
simplecov-html (~> 0.10.0)
simplecov-html (0.10.2)
spdx-licenses (1.1.0)
text (1.3.1)
uber (0.1.0)
unicode-display_width (1.3.0)
vcr (4.0.0)
webmock (3.4.2)
addressable (>= 2.3.6)
crack (>= 0.3.2)
hashdiff

PLATFORMS
ruby

DEPENDENCIES
google-api-client
googleauth
metadata-json-lint
parallel_tests
puppet (>= 4.2.0)
@@ -98,7 +149,10 @@ DEPENDENCIES
rspec-mocks
rspec-puppet
rubocop
semantic_puppet
simplecov
vcr
webmock

BUNDLED WITH
1.16.1
1 change: 1 addition & 0 deletions templates/puppet/Rakefile
@@ -0,0 +1 @@
require 'puppetlabs_spec_helper/rake_tasks'
24 changes: 24 additions & 0 deletions templates/puppet/fixtures.yml.erb
@@ -0,0 +1,24 @@
---
<% if false # the license inside this if block pertains to this file -%>
# Copyright 2017 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
<% end -%>
<%= compile 'templates/license.erb' -%>

<%= lines(autogen_notice :ruby) -%>

fixtures:
forge_modules:
gauth: google/gauth
symlinks:
<%= @api.prefix -%>: "#{source_dir}"
130 changes: 130 additions & 0 deletions templates/puppet/integration_spec.rb
@@ -0,0 +1,130 @@
<% if false # the license inside this if block pertains to this file -%>
# Copyright 2017 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
<% end -%>
<%#
Contributor: I love this description.

This file is complicated, and deserves some documentation here, so that
you, brave adventurer, can understand what's happened. This file's
goal is to run the examples required by puppet compilation. Essentially,
you can think of it as a shell script that runs 'puppet apply examples/foo.pp',
then 'puppet apply examples/delete_foo.pp', and checks for errors.

It has to be more complicated than that. :)
"Fiddling with puppet internal APIs is a lonely and painful path to take."
- David Schmitt, Puppet SWE, September 2018

First, let's talk about the reason that it *isn't* a shell script.
We need unit tests that really, really validate our puppet modules - we
can't have them breaking on us, because the community isn't active enough
that we can trust that we'll find out through regular channels. We
absolutely must have tests that we can count on to tell us if the module
is working as expected. And those tests need to run on every PR, because
these modules are complicated enough that they can break in subtle ways.
Consequently, we need to be able to run tests that validate use against the
real GCP APIs, but without spending incredible amounts of time (or money)
or actually creating hordes of resources. That means a VCR test.

A VCR test is a kind of test that runs once, making real HTTP requests along
the way. It records the requests and responses. On later runs, it mocks
those same HTTP requests with those same responses, making your tests
repeatable (desirable), fast (desirable), and in our case, also free!

You can't write a test like that using the puppet command line tool, so
instead we plumb into the guts of puppet. Our main hook is a function called
apply_compiled_manifest (and its sibling, apply_with_error_check).

These functions are not really very similar to `puppet apply`. Instead of
taking a filename, they take the entire manifest as a string. This causes
subtle problems deep within Puppet - Puppet depends on the filename of the
manifest in a handful of ways, not least of which is to find out what
module it is part of (to automagically load dependencies and functions).
Since that isn't possible, we tack on the Puppet override of our own
environment and loader. Since these functions don't call out to Facter
the way that `puppet apply` does, we need to substitute in our own project
name and credential path (which all our examples use). Hence the
get_example() function - it does more than just read the file.

You can see all this stuff in spec_helper.rb.erb.

There's also the issue of the begin/rescue pattern you'll see here. It's
ugly! It's also necessary. compile_to_ral() calls out to our mocked up
loader, but fails to successfully compile the manifest due to some
initialization which is mandatory, but isn't possible to complete from
outside puppet. HOWEVER! We're in luck! In calling some cleanup code,
in reporting on the failure, compile_to_ral actually *performs* the
initialization we need. So, if we try it twice, it works the second
time. Lucky.
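The try-twice workaround reads oddly in prose; as a sketch (the stub names are illustrative, not Puppet's API):

```ruby
# Stub mimicking compile_to_ral: the first call fails, but performs the
# missing initialization as a side effect, so the second call succeeds.
def compile_to_ral_stub(attempt)
  raise 'loader not initialized' if attempt == 1
  :catalog
end

# Try twice, eating the first failure - the pattern described above.
def compile_with_retry
  attempts = 0
  begin
    attempts += 1
    compile_to_ral_stub(attempts)
  rescue StandardError
    retry if attempts == 1
    raise
  end
end

compile_with_retry # => :catalog
```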

So! We *can* successfully apply a manifest from
under webmock, as long as we:
- try twice, eating the first failure.
- don't need to use any functions that are loaded from modules
- ensure all our dependent modules have their 'lib' dir in $LOAD_PATH
- do all our own variable substitution
- rewrite any necessary functions in puppet and stick them in
the manifest.
- import all the magic from puppetlabs_spec_helper/module_spec_helper,
which I have done without fully understanding it. The comments in
the code suggest that the authors also do not fully understand it.

Hopefully that explains the complexity here in a way that will continue
to be useful as the tests develop further.

Let me also explain the order in which these commands are run. First,
there is a pre-destroy - in the event that an older test failed to clean
up after itself, the next run should take care of it. Second, we create
the resources in the manifest. Third, we run the create again, mocking
out any requests during 'flush' and making them crash the test - this is
to check idempotency of the manifests. Fourth, we delete the resources
in the manifest, and fifth, we check the idempotency of that delete by
confirming that no resources make web requests during 'flush'.

VCR recordings are called 'cassettes', and they are stored in
'spec/cassettes'. If present, they are used - if absent, created.
Contributor: Are we planning on storing the cassettes on GitHub and updating them periodically?

Contributor Author: Planning instead on storing them somewhere auth'd, because they include the raw text of the request we send with our service account private key. But that's not part of this PR just yet - I want to get the tests in first, then integrate them.

#-%>
<%= compile 'templates/license.erb' -%>

<%= lines(autogen_notice :puppet) -%>
require 'spec_helper'
require 'vcr'

VCR.configure do |c|
c.cassette_library_dir = 'spec/cassettes'
c.hook_into :webmock
c.configure_rspec_metadata!
end

describe '<%= Google::StringUtils.underscore(obj.name) -%>.create', vcr: true do
it 'creates and destroys non-existent <%= Google::StringUtils.underscore(obj.name) -%>' do
puts 'pre-destroying <%= Google::StringUtils.underscore(obj.name) -%>'
VCR.use_cassette('pre_destroy_<%= Google::StringUtils.underscore(obj.name) -%>') do
run_example('delete_<%= Google::StringUtils.underscore(obj.name) -%>')
end
puts 'creating <%= Google::StringUtils.underscore(obj.name) -%>'
VCR.use_cassette('create_<%= Google::StringUtils.underscore(obj.name) -%>') do
run_example('<%= Google::StringUtils.underscore(obj.name) -%>')
end
puts 'checking that <%= Google::StringUtils.underscore(obj.name) -%> is created'
VCR.use_cassette('check_<%= Google::StringUtils.underscore(obj.name) -%>') do
validate_no_flush_calls('<%= Google::StringUtils.underscore(obj.name) -%>')
end
puts 'destroying <%= Google::StringUtils.underscore(obj.name) -%>'
VCR.use_cassette('destroy_<%= Google::StringUtils.underscore(obj.name) -%>') do
run_example('delete_<%= Google::StringUtils.underscore(obj.name) -%>')
end
puts 'confirming <%= Google::StringUtils.underscore(obj.name) -%> destroyed'
VCR.use_cassette('check_destroy_<%= Google::StringUtils.underscore(obj.name) -%>') do
validate_no_flush_calls('delete_<%= Google::StringUtils.underscore(obj.name) -%>')
end
Contributor: Just something to think about:

Imagine I have a Managed Zone with a ResourceRecordSet. I'm testing the ResourceRecordSet in this case.

First, I "destroy" my resource record set to get my environment set up. I don't really care about this network call. If it actually destroys an RRS or doesn't, I don't care. We're testing that part later on. This could make the cassettes weird, as the cassettes would disagree if my environment happened to have the RRS already created.

The second part is the side effects of the "pre-destroy" script. In order to destroy a RRS, I have to verify that my Managed Zone exists. That means, after I call the destroy script, I have no RRS, but I do have a Managed Zone. Even after this entire test suite is run, my Managed Zone will still be lingering.

Here's the pattern I use for Ansible:

  • Build all of the dependent resources (blindly, I don't care what happens because we're not testing those resources)
  • Build the resource in question - test if it exists
  • Build the resource again - test if nothing changed
  • Destroy the resource - test if no longer exists
  • Destroy the resource again - test that nothing changed
  • Destroy all of the dependent resources (blindly, I don't care what happens because we're not testing those resources)

Contributor Author: Ooh, interesting. Several parts of this I have a response to; the rest I'm still chewing on.

> This could make the cassettes weird, as the cassettes would disagree if my environment happened to have the RRS already created.

This is okay - if the RRS exists beforehand, the cassette will show a destroy ... but since the cassette also includes the check to see if the RRS exists, it won't ever be inconsistent until the next time we re-record the cassettes.

> The second part is the side effects of the "pre-destroy" script. In order to destroy a RRS, I have to verify that my Managed Zone exists. That means, after I call the destroy script, I have no RRS, but I do have a Managed Zone. Even after this entire test suite is run, my Managed Zone will still be lingering.

This isn't great! We have some foo.pp and delete_foo.pp pairs that, if run in sequence, won't produce an empty environment? I think we should probably fix that!

I think your ansible pattern is a good one, but I think that in Puppet I would have a rough time figuring out which resources are the dependents and which are the resource under test just from looking at the manifest as a string ... and by the time I've got it parsed into a RAL, it's too late to pull out pieces of it, I think. Do you think that would work? I could look into it.

What do you advise?

Contributor (@rambleraptor, Sep 7, 2018): On the first part, that sounds fine. I'm a little unclear how we're checking the cassettes to make sure they work properly (are we checking by hand? when will the tests "crash" because the network calls differ? what about headers, access tokens, and the like?), but I'm assuming that'll all become more clear in due time.

On the second part, I've got a mess of words.

Two parts to this:

  1. How do we run the dependencies? (you probably already know the answer to this)

I think we can do this using good old-fashioned before/after test calls on RSpec.

In the case of RRS - before we run the tests, we run the create-managed-zone example. After we run all of the tests, we run the delete-managed-zone example to clean up. RSpec has before/after test handlers that will make the tests look nice + handle running things before/after for us (you've probably seen/used those before).

This is assuming that the create/delete managed-zone examples use the same name as the create/delete resource-record-set.
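A plain-Ruby sketch of that ordering (not RSpec itself; the example names are hypothetical) - dependency setup in a before-style hook, the resource under test in the examples, teardown in an after-style hook:

```ruby
# Minimal stand-in for RSpec's before(:all) / examples / after(:all) ordering.
hooks = { before: [], after: [] }
examples = []

hooks[:before] << ->(log) { log << 'apply managed_zone.pp (dependency)' }
examples      << ->(log) { log << 'create/verify resource_record_set' }
examples      << ->(log) { log << 'delete/verify resource_record_set' }
hooks[:after]  << ->(log) { log << 'apply delete_managed_zone.pp (dependency)' }

log = []
hooks[:before].each { |h| h.call(log) }
examples.each      { |e| e.call(log) }
hooks[:after].each { |h| h.call(log) }

p log
```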

  2. How do we figure out the dependencies?

Now, as for coming up with the dependency patterns, we shouldn't let Puppet figure that out. Magic Modules can pretty easily figure that out though. If you look at the autogenerated Chef tests, you'll see that each autogenerated test is placed the proper order. That's because Chef doesn't build out a graph like Puppet does. If you put the Chef blocks in the wrong order, things just crash. I built out a dependency graph that looks at all of the ResourceRefs and figures out what should be run in what order.

We could use that idea and just output tests that explicitly run tasks in a certain order. Puppet wouldn't have to figure out dependencies or anything like that - it's just running tasks in a prescribed order.

Contributor Author: Interesting. That's very handy! I bet that will work great! What a fun idea. I'll poke at it next week - I need tomorrow for some other work.

Contributor Author: I've thought about this some more, and I'm not inclined to implement it the way you suggested at first glance. Let me see if we're on the same page about this.

Problem with the current implementation:

  • dangling resources are possible, or even likely, or even literally-impossible-to-avoid in some resources.
  • every test also implicitly tests its dependencies - failures won't be isolated, making root-causing challenging.

But I see some issues with your methodology as well:

  • all examples are tied together, meaning that a change to improve testing of network will require 10-50 changes throughout the examples directory.
  • all examples need to use the same dependencies, which means that we'll have a real hard time with, e.g. VPN Tunnel testing.
  • it's not possible to test autorequires - we're forcing create-before-create and destroy-after-destroy behavior that we actually want to confirm works correctly.
  • lamely: dependency graph behavior feels complicated and I'm inclined to not add complicated things, because debugging tests is already a pain and I don't want to make it more confusing in there.

So here's my theory. I could rewrite the entire set of examples, splitting them into dependencies and unit-under-test - this is almost done already, honestly, with the 'filename != readme' check that's in most of 'em. Then we could know which tests failed in the before bit and which tests failed in the unit-under-test bit. Then we get rid of the dependency on a delete_foo manifest and just swap the present to absent bit in the unit-under-test, then swap the present to absent bits in the dependencies. 4-step process, most of the perks, few of the flaws.

Downside: that sounds hard to do.
Question: is it worth doing?

Contributor: Totally understand the issues.

I don't entirely follow your solution.

end
end
21 changes: 21 additions & 0 deletions templates/puppet/mock_healthcheck_fn.pp
@@ -0,0 +1,21 @@
<% if false # the license inside this if block pertains to this file -%>
# Copyright 2017 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
<% end -%>
<%= compile 'templates/license.erb' -%>

<%= lines(autogen_notice :ruby) -%>

function gcompute_health_check_ref(String $name, String $project_name) >> String {
"https://www.googleapis.com/compute/v1/projects/$project_name/global/healthChecks/$name"
}
5 changes: 5 additions & 0 deletions templates/puppet/provider_spec.erb
@@ -17,6 +17,11 @@
<%= lines(autogen_notice :puppet) -%>

require 'spec_helper'
require 'network_blocker'
require 'spec/bundle.rb'
require 'spec/copyright.rb'
require 'spec/fake_auth.rb'
require 'spec/test_constants.rb'

<%
if object.readonly