feat(hos_client_create, hos_client_destory): calling destroy multiple times no longer causes a double free

彭宣正
2020-12-14 17:24:58 +08:00
parent 505d529c32
commit 10b370e486
55976 changed files with 8544395 additions and 2 deletions


@@ -0,0 +1,41 @@
---
name: "\U0001F41B Bug report"
about: Create a report to help us improve
title: ''
labels: bug, needs-triage
assignees: ''
---
Confirm by changing [ ] to [x] below to ensure that it's a bug:
- [ ] I've gone through [Developer Guide](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/welcome.html) and [API reference](http://sdk.amazonaws.com/cpp/api/LATEST/index.html)
- [ ] I've searched for [previous similar issues](https://github.com/aws/aws-sdk-cpp/issues) and didn't find any solution
**Describe the bug**
A clear and concise description of what the bug is.
**SDK version number**
**Platform/OS/Hardware/Device**
What are you running the SDK on?
**To Reproduce (observed behavior)**
Steps to reproduce the behavior (please share code)
**Expected behavior**
A clear and concise description of what you expected to happen.
**Logs/output**
If applicable, add logs or error output.
To enable logging, set the following SDK options:
*REMEMBER TO SANITIZE YOUR PERSONAL INFO*
```
Aws::SDKOptions options;
options.loggingOptions.logLevel = Aws::Utils::Logging::LogLevel::Trace;
Aws::InitAPI(options);
```
**Additional context**
Add any other context about the problem here.


@@ -0,0 +1,20 @@
---
name: "\U0001F680 Feature request"
about: Suggest an idea for this project
title: ''
labels: feature-request, needs-triage
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.


@@ -0,0 +1,30 @@
---
name: "\U0001F4AC Questions / Help"
about: If you have questions, please check AWS Forums or StackOverflow
title: ''
labels: guidance, needs-triage
assignees: ''
---
Confirm by changing [ ] to [x] below:
- [ ] I've gone through [Developer Guide](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/welcome.html) and [API reference](http://sdk.amazonaws.com/cpp/api/LATEST/index.html)
- [ ] I've searched for [previous similar issues](https://github.com/aws/aws-sdk-cpp/issues) and didn't find any solution
**Platform/OS/Hardware/Device**
What are you running the SDK on?
**Describe the question**
**Logs/output**
If applicable, add logs or error output.
To enable logging, set the following SDK options:
*REMEMBER TO SANITIZE YOUR PERSONAL INFO*
```
Aws::SDKOptions options;
options.loggingOptions.logLevel = Aws::Utils::Logging::LogLevel::Trace;
Aws::InitAPI(options);
```


@@ -0,0 +1,21 @@
*Issue #, if available:*
*Description of changes:*
*Check all that apply:*
- [ ] Did a self-review of this PR.
- [ ] Added proper tests to cover this PR. (If tests are not applicable, explain.)
- [ ] Checked if this PR is a breaking (APIs have been changed) change.
- [ ] Checked if this PR will _not_ introduce cross-platform inconsistent behavior.
- [ ] Checked if this PR would require a ReadMe/Wiki update.
Check which platforms you have built the SDK on to verify the correctness of this PR.
- [ ] Linux
- [ ] Windows
- [ ] Android
- [ ] macOS
- [ ] iOS
- [ ] Other Platforms
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.


@@ -0,0 +1,21 @@
name: cspell
on: [push]
jobs:
  cspell:
    name: cspell
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2
        with:
          path: aws-sdk-cpp
      - name: Install Dependencies
        run: |
          sudo npm install -g cspell
      - name: cspell
        run: |
          cd aws-sdk-cpp
          export CSPELL_OUTPUT=$(sudo cspell --no-summary "aws-cpp-sdk-core/**/*.h" "aws-cpp-sdk-core/**/*.cpp")
          if [ -n "$CSPELL_OUTPUT" ]; then echo "$CSPELL_OUTPUT" && exit 1; fi;


@@ -0,0 +1,46 @@
name: "Close stale issues"
# Controls when the action will run.
on:
  schedule:
    - cron: "0 0 * * *"
jobs:
  cleanup:
    runs-on: ubuntu-latest
    name: Stale issue job
    steps:
      - uses: aws-actions/stale-issue-cleanup@v3
        with:
          # Setting messages to an empty string will cause the automation to skip
          # that category
          ancient-issue-message: Greetings! Sorry to say but this is a very old issue that is probably not getting as much attention as it deserves. We encourage you to check if this is still an issue in the latest release and if you find that this is still a problem, please feel free to open a new one.
          stale-issue-message: Greetings! It looks like this issue hasn't been active in longer than a week. We encourage you to check if this is still an issue in the latest release. Because it has been longer than a week since the last update, and in the absence of more information, we will be closing this issue soon. If you find that this is still a problem, please feel free to add a comment or an upvote to prevent automatic closure, or if the issue is already closed, please feel free to open a new one.
          stale-pr-message: Greetings! It looks like this PR hasn't been active in longer than a week. Add a comment or an upvote to prevent automatic closure, or if it is already closed, please feel free to open a new one.
          # These labels are required
          stale-issue-label: closing-soon
          exempt-issue-label: automation-exempt
          stale-pr-label: closing-soon
          exempt-pr-label: pr/needs-review
          response-requested-label: response-requested
          # Don't set closed-for-staleness label to skip closing very old issues
          # regardless of label
          closed-for-staleness-label: closed-for-staleness
          # Issue timing
          days-before-stale: 7
          days-before-close: 4
          days-before-ancient: 365
          # If you don't want to mark an issue as being ancient based on a
          # threshold of "upvotes", you can set this here. An "upvote" is
          # the total number of +1, heart, hooray, and rocket reactions
          # on an issue.
          minimum-upvotes-to-exempt: 1
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          loglevel: DEBUG
          # Set dry-run to true to not perform label or close actions.
          dry-run: false

support/aws-sdk-cpp-master/.gitignore vendored Normal file

@@ -0,0 +1,83 @@
# IDE Artifacts
.metadata
.build
.idea
*.d
compile_commands.json
Debug
Release
*~
*#
*.iml
tags
#vim swap file
*.swp
#compiled python files
*.pyc
#Vagrant stuff
Vagrantfile
.vagrant
#Mac stuff
.DS_Store
#doxygen
doxygen/html/
doxygen/latex/
#cmake artifacts
dependencies
_build
build
_build_*
# Compiled Object files
*.slo
*.lo
*.o
*.obj
# Precompiled Headers
*.gch
*.pch
# Compiled Dynamic libraries
*.so
*.dylib
*.dll
# Fortran module files
*.mod
# Compiled Static libraries
*.lai
*.la
*.a
*.lib
# Executables
*.exe
*.out
*.app
# Android Junk
AndroidTestOutput.txt
curl
external
openssl
zlib
credentials
toolchains/android/
# codegen
code-generation/generator/target/
#config output
aws-cpp-sdk-core/include/aws/core/SDKConfig.h
#nuget
*.nupkg


@@ -0,0 +1,9 @@
*.iml
.gradle
/local.properties
/.idea/workspace.xml
/.idea/libraries
.DS_Store
/build
/captures
.externalNativeBuild


@@ -0,0 +1 @@
/build


@@ -0,0 +1,29 @@
apply plugin: 'com.android.application'
android {
compileSdkVersion 22
buildToolsVersion "21.1.2"
defaultConfig {
applicationId "aws.androidsdktesting"
minSdkVersion 21
targetSdkVersion 22
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
exclude group: 'com.android.support', module: 'support-annotations'
})
compile 'com.android.support:appcompat-v7:22.2.1'
testCompile 'junit:junit:4.12'
}


@@ -0,0 +1,17 @@
# Add project specific ProGuard rules here.
# By default, the flags in this file are appended to flags specified
# in /home/local/ANT/bambrose/Android/Sdk/tools/proguard/proguard-android.txt
# You can edit the include path and order by changing the proguardFiles
# directive in build.gradle.
#
# For more details, see
# http://developer.android.com/guide/developing/tools/proguard.html
# Add any project specific keep options here:
# If your project uses WebView with JS, uncomment the following
# and specify the fully qualified class name to the JavaScript interface
# class:
#-keepclassmembers class fqcn.of.javascript.interface.for.webview {
# public *;
#}


@@ -0,0 +1,26 @@
package aws.androidsdktesting;
import android.content.Context;
import android.support.test.InstrumentationRegistry;
import android.support.test.runner.AndroidJUnit4;
import org.junit.Test;
import org.junit.runner.RunWith;
import static org.junit.Assert.*;
/**
* Instrumentation test, which will execute on an Android device.
*
* @see <a href="http://d.android.com/tools/testing">Testing documentation</a>
*/
@RunWith(AndroidJUnit4.class)
public class ExampleInstrumentedTest {
@Test
public void useAppContext() throws Exception {
// Context of the app under test.
Context appContext = InstrumentationRegistry.getTargetContext();
assertEquals("aws.androidsdktesting", appContext.getPackageName());
}
}


@@ -0,0 +1,24 @@
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="aws.androidsdktesting">
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".RunSDKTests">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>


@@ -0,0 +1,184 @@
package aws.androidsdktesting;
import android.app.Activity;
import android.content.Context;
import android.os.AsyncTask;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
public class RunSDKTests extends AppCompatActivity {
class TestTask extends AsyncTask<String, Void, Boolean> {
private Activity m_source;
private Map< String, ArrayList< String > > m_testLibraryDependencies;
void InitializeLibraryDependencies()
{
m_testLibraryDependencies = new HashMap< String, ArrayList< String > >();
ArrayList< String > coreDependencies = new ArrayList< String >();
coreDependencies.add("runCoreUnitTests");
m_testLibraryDependencies.put("core", coreDependencies);
ArrayList< String > cloudfrontDependencies = new ArrayList< String >();
cloudfrontDependencies.add("aws-cpp-sdk-cloudfront");
cloudfrontDependencies.add("runCloudfrontIntegrationTests");
m_testLibraryDependencies.put("cloudfront", cloudfrontDependencies);
ArrayList< String > cognitoidentityDependencies = new ArrayList< String >();
cognitoidentityDependencies.add("aws-cpp-sdk-cognito-identity");
cognitoidentityDependencies.add("aws-cpp-sdk-iam");
cognitoidentityDependencies.add("aws-cpp-sdk-access-management");
cognitoidentityDependencies.add("runCognitoIntegrationTests");
m_testLibraryDependencies.put("cognito-identity", cognitoidentityDependencies);
ArrayList< String > dynamodbDependencies = new ArrayList< String >();
dynamodbDependencies.add("aws-cpp-sdk-dynamodb");
dynamodbDependencies.add("runDynamoDBIntegrationTests");
m_testLibraryDependencies.put("dynamodb", dynamodbDependencies);
ArrayList< String > identityDependencies = new ArrayList< String >();
identityDependencies.add("aws-cpp-sdk-identity-management");
identityDependencies.add("runIdentityManagementTests");
m_testLibraryDependencies.put("identity", identityDependencies);
ArrayList< String > lambdaDependencies = new ArrayList< String >();
lambdaDependencies.add("aws-cpp-sdk-kinesis");
lambdaDependencies.add("aws-cpp-sdk-lambda");
lambdaDependencies.add("aws-cpp-sdk-cognito-identity");
lambdaDependencies.add("aws-cpp-sdk-iam");
lambdaDependencies.add("aws-cpp-sdk-access-management");
lambdaDependencies.add("runLambdaManagementTests");
m_testLibraryDependencies.put("lambda", lambdaDependencies);
ArrayList< String > loggingDependencies = new ArrayList< String >();
loggingDependencies.add("aws-cpp-sdk-s3");
loggingDependencies.add("aws-cpp-sdk-logging");
loggingDependencies.add("runLoggingIntegrationTests");
m_testLibraryDependencies.put("logging", loggingDependencies);
ArrayList< String > redshiftDependencies = new ArrayList< String >();
redshiftDependencies.add("aws-cpp-sdk-redshift");
redshiftDependencies.add("runRedshiftIntegrationTests");
m_testLibraryDependencies.put("redshift", redshiftDependencies);
ArrayList< String > s3Dependencies = new ArrayList< String >();
s3Dependencies.add("aws-cpp-sdk-s3");
s3Dependencies.add("runS3IntegrationTests");
m_testLibraryDependencies.put("s3", s3Dependencies);
ArrayList< String > sqsDependencies = new ArrayList< String >();
sqsDependencies.add("aws-cpp-sdk-sqs");
sqsDependencies.add("aws-cpp-sdk-cognito-identity");
sqsDependencies.add("aws-cpp-sdk-iam");
sqsDependencies.add("aws-cpp-sdk-access-management");
sqsDependencies.add("runSqsIntegrationTests");
m_testLibraryDependencies.put("sqs", sqsDependencies);
ArrayList< String > transferDependencies = new ArrayList< String >();
transferDependencies.add("aws-cpp-sdk-s3");
transferDependencies.add("aws-cpp-sdk-transfer");
transferDependencies.add("runTransferIntegrationTests");
m_testLibraryDependencies.put("transfer", transferDependencies);
ArrayList< String > unifiedDependencies = new ArrayList< String >();
unifiedDependencies.add("android-unified-tests");
m_testLibraryDependencies.put("unified", unifiedDependencies);
}
public TestTask(Activity taskSource)
{
m_source = taskSource;
}
protected Boolean doInBackground(String... testNames)
{
InitializeLibraryDependencies();
String testName = testNames[ 0 ];
Log.i("AwsNativeSDK", "Running test " + testName);
if(!testName.equals("unified"))
{
Log.i("AwsNativeSDK", "Loading common libraries ");
//System.loadLibrary("c");
try {
    System.loadLibrary("c++_shared");
} catch (UnsatisfiedLinkError e) {
    // System.loadLibrary throws UnsatisfiedLinkError (an Error, not an
    // Exception) when a library is missing; catching Exception here would
    // never fire. The c++_shared runtime is optional, fall through.
}
try {
    System.loadLibrary("gnustl_shared");
} catch (UnsatisfiedLinkError e) {
    // Only one of the two STL runtimes is packaged per build.
}
System.loadLibrary("log");
System.loadLibrary("aws-cpp-sdk-core");
System.loadLibrary("testing-resources");
}
ArrayList< String > testLibraries = m_testLibraryDependencies.get( testName );
if(testLibraries == null)
{
Log.i("AwsNativeSDK", "Test " + testName + " does not exist!");
return false;
}
Log.i("AwsNativeSDK", "Loading test libraries ");
for(String testLibraryName : testLibraries)
{
Log.i("AwsNativeSDK", "Loading library " + testLibraryName);
System.loadLibrary(testLibraryName);
}
Log.i("AwsNativeSDK", "Starting tests");
boolean success = runTests((Context)m_source) == 0;
if(success) {
Log.i("AwsNativeSDK", "Tests Succeeded!");
} else {
Log.i("AwsNativeSDK", "Tests Failed =(");
}
return success;
}
protected void onPostExecute(Boolean testsSucceeded)
{
Log.i("AwsNativeSDK", "Shutting down TestActivity");
m_source.finish();
}
}
static public native int runTests(Context context);
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_run_sdktests);
String testName = getIntent().getStringExtra("test");
if(testName == null) {
testName = "unified";
}
new TestTask(this).execute(testName);
}
@Override
public void onDestroy()
{
super.onDestroy();
Log.i("AwsNativeSDK", "OnDestroy called!");
}
}


@@ -0,0 +1,17 @@
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/activity_run_sdktests"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:paddingBottom="@dimen/activity_vertical_margin"
android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
tools:context="aws.androidsdktesting.RunSDKTests">
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Hello World!" />
</RelativeLayout>

Binary file not shown (added; 3.3 KiB).

Binary file not shown (added; 2.2 KiB).

Binary file not shown (added; 4.7 KiB).

Binary file not shown (added; 7.5 KiB).

Binary file not shown (added; 10 KiB).


@@ -0,0 +1,6 @@
<resources>
<!-- Example customization of dimensions originally defined in res/values/dimens.xml
(such as screen margins) for screens with more than 820dp of available width. This
would include 7" and 10" devices in landscape (~960dp and ~1280dp respectively). -->
<dimen name="activity_horizontal_margin">64dp</dimen>
</resources>


@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="utf-8"?>
<resources>
<color name="colorPrimary">#3F51B5</color>
<color name="colorPrimaryDark">#303F9F</color>
<color name="colorAccent">#FF4081</color>
</resources>


@@ -0,0 +1,5 @@
<resources>
<!-- Default screen margins, per the Android Design guidelines. -->
<dimen name="activity_horizontal_margin">16dp</dimen>
<dimen name="activity_vertical_margin">16dp</dimen>
</resources>


@@ -0,0 +1,3 @@
<resources>
<string name="app_name">AndroidSDKTesting</string>
</resources>


@@ -0,0 +1,11 @@
<resources>
<!-- Base application theme. -->
<style name="AppTheme" parent="Theme.AppCompat.Light.DarkActionBar">
<!-- Customize your theme here. -->
<item name="colorPrimary">@color/colorPrimary</item>
<item name="colorPrimaryDark">@color/colorPrimaryDark</item>
<item name="colorAccent">@color/colorAccent</item>
</style>
</resources>


@@ -0,0 +1,17 @@
package aws.androidsdktesting;
import org.junit.Test;
import static org.junit.Assert.*;
/**
* Example local unit test, which will execute on the development machine (host).
*
* @see <a href="http://d.android.com/tools/testing">Testing documentation</a>
*/
public class ExampleUnitTest {
@Test
public void addition_isCorrect() throws Exception {
assertEquals(4, 2 + 2);
}
}


@@ -0,0 +1,23 @@
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.2.2'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
jcenter()
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}


@@ -0,0 +1,17 @@
# Project-wide Gradle settings.
# IDE (e.g. Android Studio) users:
# Gradle settings configured through the IDE *will override*
# any settings specified in this file.
# For more details on how to configure your build environment visit
# http://www.gradle.org/docs/current/userguide/build_environment.html
# Specifies the JVM arguments used for the daemon process.
# The setting is particularly useful for tweaking memory settings.
org.gradle.jvmargs=-Xmx1536m
# When configured, Gradle will run in incubating parallel mode.
# This option should only be used with decoupled projects. More details, visit
# http://www.gradle.org/docs/current/userguide/multi_project_builds.html#sec:decoupled_projects
# org.gradle.parallel=true


@@ -0,0 +1,6 @@
#Mon Dec 28 10:00:20 PST 2015
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-2.14.1-all.zip


@@ -0,0 +1,160 @@
#!/usr/bin/env bash
##############################################################################
##
## Gradle start up script for UN*X
##
##############################################################################
# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
DEFAULT_JVM_OPTS=""
APP_NAME="Gradle"
APP_BASE_NAME=`basename "$0"`
# Use the maximum available, or set MAX_FD != -1 to use that value.
MAX_FD="maximum"
warn ( ) {
echo "$*"
}
die ( ) {
echo
echo "$*"
echo
exit 1
}
# OS specific support (must be 'true' or 'false').
cygwin=false
msys=false
darwin=false
case "`uname`" in
CYGWIN* )
cygwin=true
;;
Darwin* )
darwin=true
;;
MINGW* )
msys=true
;;
esac
# Attempt to set APP_HOME
# Resolve links: $0 may be a link
PRG="$0"
# Need this for relative symlinks.
while [ -h "$PRG" ] ; do
ls=`ls -ld "$PRG"`
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '/.*' > /dev/null; then
PRG="$link"
else
PRG=`dirname "$PRG"`"/$link"
fi
done
SAVED="`pwd`"
cd "`dirname \"$PRG\"`/" >/dev/null
APP_HOME="`pwd -P`"
cd "$SAVED" >/dev/null
CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar
# Determine the Java command to use to start the JVM.
if [ -n "$JAVA_HOME" ] ; then
if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
# IBM's JDK on AIX uses strange locations for the executables
JAVACMD="$JAVA_HOME/jre/sh/java"
else
JAVACMD="$JAVA_HOME/bin/java"
fi
if [ ! -x "$JAVACMD" ] ; then
die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
else
JAVACMD="java"
which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
# Increase the maximum file descriptors if we can.
if [ "$cygwin" = "false" -a "$darwin" = "false" ] ; then
MAX_FD_LIMIT=`ulimit -H -n`
if [ $? -eq 0 ] ; then
if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ] ; then
MAX_FD="$MAX_FD_LIMIT"
fi
ulimit -n $MAX_FD
if [ $? -ne 0 ] ; then
warn "Could not set maximum file descriptor limit: $MAX_FD"
fi
else
warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT"
fi
fi
# For Darwin, add options to specify how the application appears in the dock
if $darwin; then
GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\""
fi
# For Cygwin, switch paths to Windows format before running java
if $cygwin ; then
APP_HOME=`cygpath --path --mixed "$APP_HOME"`
CLASSPATH=`cygpath --path --mixed "$CLASSPATH"`
JAVACMD=`cygpath --unix "$JAVACMD"`
# We build the pattern for arguments to be converted via cygpath
ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null`
SEP=""
for dir in $ROOTDIRSRAW ; do
ROOTDIRS="$ROOTDIRS$SEP$dir"
SEP="|"
done
OURCYGPATTERN="(^($ROOTDIRS))"
# Add a user-defined pattern to the cygpath arguments
if [ "$GRADLE_CYGPATTERN" != "" ] ; then
OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)"
fi
# Now convert the arguments - kludge to limit ourselves to /bin/sh
i=0
for arg in "$@" ; do
CHECK=`echo "$arg"|egrep -c "$OURCYGPATTERN" -`
CHECK2=`echo "$arg"|egrep -c "^-"` ### Determine if an option
if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then ### Added a condition
eval `echo args$i`=`cygpath --path --ignore --mixed "$arg"`
else
eval `echo args$i`="\"$arg\""
fi
i=$((i+1))
done
case $i in
(0) set -- ;;
(1) set -- "$args0" ;;
(2) set -- "$args0" "$args1" ;;
(3) set -- "$args0" "$args1" "$args2" ;;
(4) set -- "$args0" "$args1" "$args2" "$args3" ;;
(5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;;
(6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;;
(7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;;
(8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;;
(9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;;
esac
fi
# Split up the JVM_OPTS And GRADLE_OPTS values into an array, following the shell quoting and substitution rules
function splitJvmOpts() {
JVM_OPTS=("$@")
}
eval splitJvmOpts $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS
JVM_OPTS[${#JVM_OPTS[*]}]="-Dorg.gradle.appname=$APP_BASE_NAME"
exec "$JAVACMD" "${JVM_OPTS[@]}" -classpath "$CLASSPATH" org.gradle.wrapper.GradleWrapperMain "$@"


@@ -0,0 +1,90 @@
@if "%DEBUG%" == "" @echo off
@rem ##########################################################################
@rem
@rem Gradle startup script for Windows
@rem
@rem ##########################################################################
@rem Set local scope for the variables with windows NT shell
if "%OS%"=="Windows_NT" setlocal
@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
set DEFAULT_JVM_OPTS=
set DIRNAME=%~dp0
if "%DIRNAME%" == "" set DIRNAME=.
set APP_BASE_NAME=%~n0
set APP_HOME=%DIRNAME%
@rem Find java.exe
if defined JAVA_HOME goto findJavaFromJavaHome
set JAVA_EXE=java.exe
%JAVA_EXE% -version >NUL 2>&1
if "%ERRORLEVEL%" == "0" goto init
echo.
echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.
goto fail
:findJavaFromJavaHome
set JAVA_HOME=%JAVA_HOME:"=%
set JAVA_EXE=%JAVA_HOME%/bin/java.exe
if exist "%JAVA_EXE%" goto init
echo.
echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.
goto fail
:init
@rem Get command-line arguments, handling Windows variants
if not "%OS%" == "Windows_NT" goto win9xME_args
if "%@eval[2+2]" == "4" goto 4NT_args
:win9xME_args
@rem Slurp the command line arguments.
set CMD_LINE_ARGS=
set _SKIP=2
:win9xME_args_slurp
if "x%~1" == "x" goto execute
set CMD_LINE_ARGS=%*
goto execute
:4NT_args
@rem Get arguments from the 4NT Shell from JP Software
set CMD_LINE_ARGS=%$
:execute
@rem Setup the command line
set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar
@rem Execute Gradle
"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %CMD_LINE_ARGS%
:end
@rem End local scope for the variables with windows NT shell
if "%ERRORLEVEL%"=="0" goto mainEnd
:fail
rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
rem the _cmd.exe /c_ return code!
if not "" == "%GRADLE_EXIT_CONSOLE%" exit 1
exit /b 1
:mainEnd
if "%OS%"=="Windows_NT" endlocal
:omega


@@ -0,0 +1 @@
include ':app'


@@ -0,0 +1,70 @@
# Breaking changes in AWS SDK for C++
## [1.8.0](https://github.com/aws/aws-sdk-cpp/tree/1.8.0) (2020-06-30)
Check our [Wiki](https://github.com/aws/aws-sdk-cpp/wiki/What%E2%80%99s-New-in-AWS-SDK-for-C---Version-1.8) for a comprehensive list of the features introduced in this version.
## [1.7.0](https://github.com/aws/aws-sdk-cpp/tree/1.7.0) (2018-11-15)
### aws-cpp-sdk-core
Add new dependencies: [aws-c-common](https://github.com/awslabs/aws-c-common), [aws-checksums](https://github.com/awslabs/aws-checksums) and [aws-c-event-stream](https://github.com/awslabs/aws-c-event-stream) to support S3 select streaming API. The API is implemented in C99 via libraries that are developed by AWS as well.
These libraries are downloaded and built as part of the CMake configure step. That can be disabled via the new switch `-DBUILD_DEPS=OFF`. The switch is set to ON by default.
### aws-cpp-sdk-s3
Add support for the S3 `SelectObjectContent` API.
## [1.6.0](https://github.com/aws/aws-sdk-cpp/tree/1.6.0) (2018-08-28)
### aws-cpp-sdk-core
Code for future SDK instrumentation and telemetry
## [1.5.0](https://github.com/aws/aws-sdk-cpp/tree/1.5.0) (2018-07-25)
### aws-cpp-sdk-core
`cJSON` is now the underlying JSON parser, replacing JsonCpp.
`JsonValue` is now strictly a DOM manipulation class. All reads and serialization must be done through the new
`JsonView` class. `JsonView` is lightweight and follows the `string_view` concept from C++17, so it does not
extend the lifetime of its underlying DOM (the `JsonValue`).
## [1.4.0](https://github.com/aws/aws-sdk-cpp/tree/1.4.0) (2018-02-19)
### aws-cpp-sdk-s3
Fixed a bug in Aws::S3::Model::CopyObjectResult by adding CopyObjectResultDetails as a member of CopyObjectResult.
A member of CopyObjectResult was missing because of a name conflict: the related files were overwritten when the source code was generated.
That member has been renamed to CopyObjectResultDetails.
### aws-cpp-sdk-config
Removed unused enum values.
From the service release notes:
> AWS Config updated the ConfigurationItemStatus enum values. The values prior to this update did not represent appropriate values returned by GetResourceConfigHistory. You must update your code to enumerate the new enum values so this is a breaking change. To map old properties to new properties, use the following descriptions: New discovered resource - Old property: Discovered, New property: ResourceDiscovered. Updated resource - Old property: Ok, New property: OK. Deleted resource - Old property: Deleted, New property: ResourceDeleted or ResourceDeletedNotRecorded. Not-recorded resource - Old property: N/A, New property: ResourceNotRecorded or ResourceDeletedNotRecorded.
## [1.3.0](https://github.com/aws/aws-sdk-cpp/tree/1.3.0) (2017-11-09)
### aws-cpp-sdk-s3
Changed the constructor of AWSAuthV4Signer to use PayloadSigningPolicy instead of a boolean.
## [1.2.0](https://github.com/aws/aws-sdk-cpp/tree/1.2.0) (2017-09-24)
### aws-cpp-sdk-transfer
Changed ownership of thread executor in TransferManager.
## [1.1.1](https://github.com/aws/aws-sdk-cpp/tree/1.1.1) (2017-06-22)
### aws-cpp-sdk-transfer
Introduced a builder function to instantiate TransferManager
as a shared_ptr. That ensures that other threads can increase
TransferManager's lifetime until all the callbacks have finished.


@@ -0,0 +1,54 @@
#!/bin/bash
set -eu
if [[ $# -lt 1 ]]; then
echo -e "error: missing location parameter.\n"
echo -e "USAGE: BuildMyCode [OPTIONS]\n"
echo "OPTIONS:"
echo "-b|--branch The name of the git branch. Default is the current branch."
echo "-c|--cmake-flags Any additional CMake flags to pass to the build jobs."
echo "-l|--location The name of the S3 key under which to save the BuildSpec.zip file."
exit 1
fi
branch=""
cmakeFlags=""
# default to the current branch
if [[ -z $branch ]]; then
branch=$(git rev-parse --abbrev-ref HEAD)
fi
POSITIONAL=()
while [[ $# -gt 0 ]]
do
key="$1"
case $key in
-b|--branch)
branch=$2
shift # past argument
shift # past value
;;
-c|--cmake-flags)
cmakeFlags=$2
shift # past argument
shift # past value
;;
-l|--location) # where to put the buildspec.zip file
buildspecLocation=$2
shift # past argument
shift # past value
;;
*) # unknown option
POSITIONAL+=("$1") # save it in an array for later
shift # past argument
;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters
json='{ "branch": "'$branch'", "cmakeFlags": "'$cmakeFlags'" }'
echo "$json" >BuildSpec.json
zip -r BuildSpec.zip BuildSpec.json
aws s3 cp BuildSpec.zip s3://aws-sdk-cpp-dev-pipeline/"${buildspecLocation}"/BuildSpec.zip
S3VERSION=$(aws s3api head-object --bucket aws-sdk-cpp-dev-pipeline --key "${buildspecLocation}"/BuildSpec.zip | awk '/VersionId/{gsub(/[",]/, ""); print $2}')
echo -e "\033[30;42mYour build version ID is ${S3VERSION}\033[0m"


@@ -0,0 +1,40 @@
#!/usr/bin/python
import argparse
import shutil
import subprocess
import re
import subprocess
import os
import zipfile
import io
import json
def Main():
parser = argparse.ArgumentParser(description="Creates a release doc based on a list of changes.")
parser.add_argument("--changesList", action="store")
args = vars( parser.parse_args() )
changes = args["changesList"]
changeDoc = {}
changeList = changes.split()
releases = []
release = {}
features = []
for change in changeList:
feature = {}
feature["service-name"] = change.replace("aws-cpp-sdk-", "")
features.append(feature)
release["features"] = features
releases.append(release)
changeDoc["releases"] = releases
print(json.dumps(changeDoc))
if __name__ == "__main__":
    Main()
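The transformation above condenses to a pure function, which makes the output shape easy to see (the function name is illustrative, not part of the script):

```python
import json

def build_release_doc(changes):
    # Same transformation as the script above: one feature entry per
    # changed module, keyed by the service name without its prefix.
    features = [{"service-name": c.replace("aws-cpp-sdk-", "")}
                for c in changes.split()]
    return {"releases": [{"features": features}]}

print(json.dumps(build_release_doc("aws-cpp-sdk-s3 aws-cpp-sdk-ec2")))
```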


@@ -0,0 +1,20 @@
#!/bin/bash
FILES_CHANGED=$(git diff --name-only "$1@{1}" "$1")
declare -A DIRS_SET
for FILE in $FILES_CHANGED ; do
DIR=`echo $FILE | cut -d "/" -f1`
if test "${DIRS_SET[${DIR}]+isset}"
then
continue
else
echo $DIR
fi
DIRS_SET[${DIR}]=""
done
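The loop above extracts each file's top-level directory and de-duplicates via an associative array; the same idea can be sketched with parameter expansion and `sort -u` (the sample file list is hypothetical):

```shell
#!/bin/bash
# Hypothetical `git diff --name-only` output.
FILES=$'aws-cpp-sdk-s3/source/a.cpp\naws-cpp-sdk-s3/include/b.h\nCMakeLists.txt'
# "${f%%/*}" keeps everything before the first '/', i.e. the top-level entry.
for f in $FILES; do echo "${f%%/*}"; done | LC_ALL=C sort -u
```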


@@ -0,0 +1,21 @@
#!/bin/bash
CURDIR="$(dirname "$(readlink -f "$0")")"
CHANGED_DIRS=`$CURDIR/DetectDirectoryChanges $1`
case $CHANGED_DIRS in
*"aws-cpp-sdk-core"*)
;&
*"CMakeLists.txt"*)
;&
*"cmake"*)
;&
*"code-generation"*)
echo "-DBUILD_ONLY=\"\""
exit 0
;;
*)
esac
BUILD_ONLY_OUT="-DBUILD_ONLY=\"${CHANGED_DIRS//$'\n'/';'}\""
echo ${BUILD_ONLY_OUT//$'aws-cpp-sdk-'/''}
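The two parameter expansions above (newline-to-semicolon join, then prefix strip) can be exercised in isolation; a minimal bash sketch with a hypothetical directory list:

```shell
#!/bin/bash
# Hypothetical output of DetectDirectoryChanges: one directory per line.
CHANGED_DIRS=$'aws-cpp-sdk-s3\naws-cpp-sdk-ec2'
# Join the lines with ';' to form CMake's BUILD_ONLY list.
BUILD_ONLY_OUT="-DBUILD_ONLY=\"${CHANGED_DIRS//$'\n'/';'}\""
# Strip the "aws-cpp-sdk-" prefix from every entry.
echo "${BUILD_ONLY_OUT//aws-cpp-sdk-/}"   # -DBUILD_ONLY="s3;ec2"
```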


@@ -0,0 +1,17 @@
#!/usr/bin/python
import sys
import json
if len(sys.argv) != 2:
print >> sys.stderr, " Usage: python ExtractBuildArgs.py <ArgName>"
exit (-1)
try:
data = json.load(open('BuildSpec.json'))
if sys.argv[1] == "cmakeFlags" and data["cmakeFlags"] != "":
print(data["cmakeFlags"])
elif sys.argv[1] == "branch" and data["branch"] != "":
print(data["branch"])
except:
print >> sys.stderr, "No related args found in BuildSpec.json"
exit(-1)


@@ -0,0 +1,9 @@
#!/bin/bash
branch=$(python aws-sdk-cpp/CI/ExtractBuildArgs.py branch)
git clone git@github.com:awslabs/aws-sdk-cpp-staging.git aws-sdk-cpp
cd aws-sdk-cpp
git reset --hard HEAD
git checkout master
git pull
git checkout $branch


@@ -0,0 +1,22 @@
version: 0.2
phases:
build:
commands:
- mv aws-sdk-cpp /tmp
- mkdir /tmp/build
- cd /tmp/build
- python /tmp/aws-sdk-cpp/scripts/build_3rdparty.py --configs="${BUILD_CONFIG}" --sourcedir=/tmp/aws-sdk-cpp/ --parallel=${BUILD_PARALLEL} --installdir=/tmp/install --generateClients="0" --architecture=${ARCHITECTURE} --cmake_params="-DMINIMIZE_SIZE=ON -DANDROID_NATIVE_API_LEVEL=${API_LEVEL}"
post_build:
commands:
- export BUILD_JOB_NAME=$(echo "${CODEBUILD_BUILD_ID}" | cut -f1 -d ":")
- export BUILD_URL="https://console.aws.amazon.com/codesuite/codebuild/projects/${BUILD_JOB_NAME}/build/${CODEBUILD_BUILD_ID}"
- |
if [ "${CODEBUILD_BUILD_SUCCEEDING}" = "1" ]; then
aws sns publish --topic-arn ${NOTIFICATIONS_TOPIC} --message "/md [BUILD SUCCESS](${BUILD_URL}) (${CODEBUILD_BUILD_ID})";
else
aws sns publish --topic-arn ${NOTIFICATIONS_TOPIC} --message "/md [BUILD FAILURE](${BUILD_URL}) (${CODEBUILD_BUILD_ID})";
fi
artifacts:
files:
- "**/*"
base-directory: /tmp/install


@@ -0,0 +1,19 @@
version: 0.2
phases:
build:
commands:
- VERSION_NUM=$(grep AWS_SDK_VERSION_STRING aws-sdk-cpp/aws-cpp-sdk-core/include/aws/core/VersionConfig.h | cut -f2 -d '"')
- echo $VERSION_NUM | tee aws-sdk-cpp-version
post_build:
commands:
- export BUILD_JOB_NAME=$(echo "${CODEBUILD_BUILD_ID}" | cut -f1 -d ":")
- export BUILD_URL="https://console.aws.amazon.com/codesuite/codebuild/projects/${BUILD_JOB_NAME}/build/${CODEBUILD_BUILD_ID}"
- |
if [ "${CODEBUILD_BUILD_SUCCEEDING}" = "1" ]; then
aws sns publish --topic-arn ${NOTIFICATIONS_TOPIC} --message "/md [BUILD SUCCESS](${BUILD_URL}) (Extract Metadata)";
else
aws sns publish --topic-arn ${NOTIFICATIONS_TOPIC} --message "/md [BUILD FAILURE](${BUILD_URL}) (Extract Metadata)";
fi
artifacts:
files:
- "aws-sdk-cpp-version"


@@ -0,0 +1,11 @@
version: 0.2
phases:
build:
commands:
- cd ..
- zip -r latestSnapshot.zip aws-sdk-cpp
- mv latestSnapshot.zip $CODEBUILD_SRC_DIR
- cd $CODEBUILD_SRC_DIR
artifacts:
files:
- latestSnapshot.zip


@@ -0,0 +1,22 @@
version: 0.2
phases:
build:
commands:
- mv aws-sdk-cpp /tmp
- mkdir /tmp/build
- cd /tmp/build
- python /tmp/aws-sdk-cpp/scripts/build_3rdparty.py --configs="${BUILD_CONFIG}" --sourcedir=/tmp/aws-sdk-cpp/ --parallel=${BUILD_PARALLEL} --installdir=/tmp/install --generateClients="0" --cmake_params=""
post_build:
commands:
- export BUILD_JOB_NAME=$(echo "${CODEBUILD_BUILD_ID}" | cut -f1 -d ":")
- export BUILD_URL="https://console.aws.amazon.com/codesuite/codebuild/projects/${BUILD_JOB_NAME}/build/${CODEBUILD_BUILD_ID}"
- |
if [ "${CODEBUILD_BUILD_SUCCEEDING}" = "1" ]; then
aws sns publish --topic-arn ${NOTIFICATIONS_TOPIC} --message "/md [BUILD SUCCESS](${BUILD_URL}) (${CODEBUILD_BUILD_ID})";
else
aws sns publish --topic-arn ${NOTIFICATIONS_TOPIC} --message "/md [BUILD FAILURE](${BUILD_URL}) (${CODEBUILD_BUILD_ID})";
fi
artifacts:
files:
- "**/*"
base-directory: /tmp/install


@@ -0,0 +1,23 @@
version: 0.2
phases:
build:
commands:
- mkdir C:\tmp
- mv aws-sdk-cpp C:\tmp
- mkdir C:\tmp\build
- cd C:\tmp\build
- python "C:\tmp\aws-sdk-cpp\scripts\build_3rdparty.py" --architecture=${Env:ARCHITURE} --configs="${Env:BUILD_CONFIG}" --sourcedir="C:\tmp\aws-sdk-cpp" --parallel=${Env:BUILD_PARALLEL} --installdir="C:\tmp\install" --generateClients="0" --cmake_params=""
post_build:
commands:
- $BUILD_JOB_NAME=$Env:CODEBUILD_BUILD_ID.Substring(0, $Env:CODEBUILD_BUILD_ID.IndexOf(":"))
- $BUILD_URL="https://console.aws.amazon.com/codesuite/codebuild/projects/$BUILD_JOB_NAME/build/$Env:CODEBUILD_BUILD_ID"
- |
if (${Env:CODEBUILD_BUILD_SUCCEEDING} -eq 1) {
aws sns publish --topic-arn ${Env:NOTIFICATIONS_TOPIC} --message "/md [BUILD SUCCESS](${BUILD_URL}) (${Env:CODEBUILD_BUILD_ID})"
} Else {
aws sns publish --topic-arn ${Env:NOTIFICATIONS_TOPIC} --message "/md [BUILD FAILURE](${BUILD_URL}) (${Env:CODEBUILD_BUILD_ID})"
}
artifacts:
files:
- "**/*"
base-directory: C:\tmp\install


@@ -0,0 +1,90 @@
# Whenever you make any change here, you should update it in Amazon S3.
# This lambda function publishes binaries and sends notifications in the binary release pipeline.
# It copies the binaries generated by each pipeline action from a temporary location (provided in its input) to a specific S3 bucket for customer download.
# In the "Publish" stage, each lambda function is responsible for uploading the binaries for one platform.
import boto3
import json
import os
import zipfile
from botocore.client import Config
def lambda_handler(event, context):
print(event)
job_id = event['CodePipeline.job']['id']
sns_client = boto3.client('sns')
codepipeline_client = boto3.client('codepipeline')
try:
parameters = json.loads(event['CodePipeline.job']['data']['actionConfiguration']['configuration']['UserParameters'])
publish_bucket = parameters['bucket']
publish_key_prefix = parameters['key_prefix']
# Get SDK version
input_bucket = event['CodePipeline.job']['data']['inputArtifacts'][0]['location']['s3Location']['bucketName']
input_key = event['CodePipeline.job']['data']['inputArtifacts'][0]['location']['s3Location']['objectKey']
s3 = boto3.resource('s3', config=Config(signature_version='s3v4'))
s3.meta.client.download_file(input_bucket, input_key, '/tmp/aws-sdk-cpp-version.zip')
with zipfile.ZipFile('/tmp/aws-sdk-cpp-version.zip', 'r') as zip:
zip.extractall('/tmp')
with open('/tmp/aws-sdk-cpp-version', 'r') as fp:
sdk_version = fp.read().strip()
# Copy SDK binaries to public bucket
input_artifacts = event['CodePipeline.job']['data']['inputArtifacts']
for i in range(1, len(input_artifacts)):
artifact_name = input_artifacts[i]['name']
config = artifact_name[artifact_name.find('_')+1:]
publish_key = 'cpp/builds/{version}/{prefix}/{prefix}-{config}.zip'.format(
version = sdk_version,
prefix = publish_key_prefix,
config = config
)
print('Uploading artifacts to https://s3.console.aws.amazon.com/s3/object/{bucket}/{key}'.format(
bucket = publish_bucket,
key = publish_key))
s3.meta.client.copy(
{ 'Bucket': input_artifacts[i]['location']['s3Location']['bucketName'],
'Key': input_artifacts[i]['location']['s3Location']['objectKey'] },
publish_bucket, publish_key)
# Notifications
sns_response = sns_client.publish(
TopicArn = os.environ['NOTIFICATIONS_TOPIC'],
Message = '/md [PUBLISH SUCCESS]({url}) ({prefix})'.format(
url = 'https://s3.console.aws.amazon.com/s3/buckets/{bucket}/cpp/builds/{version}/{prefix}/'.format(
bucket = publish_bucket,
version = sdk_version,
prefix = publish_key_prefix
),
prefix = publish_key_prefix
)
)
print(sns_response)
codepipeline_client.put_job_success_result(
jobId = job_id
)
except Exception as e:
codepipeline_client.put_job_failure_result(
jobId = job_id,
failureDetails = {
'type': 'JobFailed',
'message': str(e)
}
)
sns_response = sns_client.publish(
TopicArn = os.environ['NOTIFICATIONS_TOPIC'],
Message = '/md [PUBLISH FAILURE]({url}) ({prefix})'.format(
url = 'https://s3.console.aws.amazon.com/s3/buckets/{bucket}/cpp/builds/{version}/{prefix}/'.format(
bucket = publish_bucket,
version = sdk_version,
prefix = publish_key_prefix
),
prefix = publish_key_prefix
)
)
print(sns_response)
print(e)
return 0


@@ -0,0 +1,36 @@
# Whenever you make any change here, you should update it in Amazon S3.
# In the binary release pipeline, build jobs send their results to an SNS topic.
# This lambda function, triggered by those SNS notifications, sends messages about the build results to a Chime room.
# Other functionality could be added in the future, such as putting metrics in CloudWatch or triggering another alarm.
import boto3
import json
import os
from botocore.vendored import requests
chime_bot_url = os.environ['CHIME_BOT_URL']
def lambda_handler(event, context):
print(event)
message = event["Records"][0]["Sns"]["Message"]
headers = {'Content-Type': 'application/json'}
data = {}
make_request = False
if "FAILURE" in message:
# @All members if the build failed.
# Converts '/md [message]' to '/md @All[message]'
firstSpaceIndex = message.find(' ')
message = message[:firstSpaceIndex+1] + '@All' + message[firstSpaceIndex+1:]
make_request = True
elif 'SUCCESS' in message:
make_request = True
if make_request:
data["Content"] = message
r = requests.post(chime_bot_url, headers = headers, data = json.dumps(data))
return r.reason
else:
return 0
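The '/md [message]' to '/md @All[message]' rewrite above is a plain string splice after the first space; a standalone check (the message text is hypothetical):

```python
message = "/md [BUILD FAILURE](https://example.com) (job-1)"
# Insert '@All' right after the first space, i.e. after the '/md ' prefix.
first_space = message.find(' ')
mentioned = message[:first_space + 1] + '@All' + message[first_space + 1:]
print(mentioned)  # /md @All[BUILD FAILURE](https://example.com) (job-1)
```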

File diff suppressed because it is too large


@@ -0,0 +1,308 @@
# Whenever you make any change here, you should update it in Amazon S3.
# This CloudFormation template is used to create resources for CodeBuild projects to build C++ SDK on Linux and Windows with both Visual Studio 2015 and 2017.
# It's a sub-template used in the main template to create binary release pipeline.
AWSTemplateFormatVersion: 2010-09-09
Parameters:
BuildConfig:
Type: String
Default: <build-config>
Description: Build config when building SDK on Linux and Windows.
BinaryReleaseResultNotificationsTopic:
Type: String
Default: <binary-release-result-notifications-topic>
Description: Topic ARN of the SNS, used to handle notifications received from lambda functions.
BinaryReleaseCodeBuildRole:
Type: String
Default: <binary-release-codebuild-role>
Description: Name of the service role used by CodeBuild projects used to build SDK.
ParameterStoreAwsAccessKeyId:
Type: String
Default: <parameter-store-aws-access-key-id>
Description: Key name in Parameter Store, used for aws access key id.
ParameterStoreAwsSecretAccessKey:
Type: String
Default: <parameter-store-aws-secret-access-key>
Description: Key name in Parameter Store, used for aws secret access key.
LinuxGccProjectName:
Type: String
Default: <linux-gcc-project-name>
Description: Name of the CodeBuild project, which will build C++ SDK on Linux with GCC.
LinuxGccImageName:
Type: String
Default: <linux-gcc-image-name>
Description: Name of the image used in the CodeBuild Project to build SDK on Linux with GCC.
LinuxGccBuildSpecLocation:
Type: String
Default: <linux-gcc-buildspec-location>
Description: Location of buildspec for CodeBuild Project to build SDK on Linux with GCC.
WindowsProjectName:
Type: String
Default: <windows-project-name>
Description: Name of the CodeBuild project, which will build C++ SDK on Windows.
WindowsVS2015ImageName:
Type: String
Default: <windows-vs2015-image-name>
Description: Name of the image used in the CodeBuild Project to build SDK on Windows with VS2015.
WindowsVS2017ImageName:
Type: String
Default: <windows-vs2017-image-name>
Description: Name of the image used in the CodeBuild Project to build SDK on Windows with VS2017.
WindowsBuildSpecLocation:
Type: String
Default: <windows-buildspec-location>
Description: Location of buildspec for CodeBuild Project to build SDK on Windows.
AndroidProjectName:
Type: String
Default: <android-project-name>
Description: Name of the CodeBuild project, which cross compiles C++ SDK on Linux with Android NDK.
AndroidBuildSpecLocation:
Type: String
Default: <android-buildspec-location>
Description: Location of buildspec for CodeBuild Project to build SDK with Android NDK.
BuildParallel:
Type: String
Default: <build-parallel>
Description: Number of jobs in parallel to build C++ SDK.
Resources:
LinuxGccProject:
Type: AWS::CodeBuild::Project
Properties:
Name:
!Join
- '-'
- - !Ref LinuxGccProjectName
- !Ref BuildConfig
ServiceRole: !Ref BinaryReleaseCodeBuildRole
Source:
Type: CODEPIPELINE
BuildSpec: !Ref LinuxGccBuildSpecLocation
Artifacts:
Type: CODEPIPELINE
Environment:
Type: LINUX_CONTAINER
ComputeType: BUILD_GENERAL1_LARGE
Image: !Ref LinuxGccImageName
EnvironmentVariables:
- Name: BUILD_CONFIG
Type: PLAINTEXT
Value: !Ref BuildConfig
- Name: BUILD_PARALLEL
Type: PLAINTEXT
Value: !Ref BuildParallel
- Name: NOTIFICATIONS_TOPIC
Type: PLAINTEXT
Value: !Ref BinaryReleaseResultNotificationsTopic
- Name: AWS_ACCESS_KEY_ID
Type: PARAMETER_STORE
Value: !Ref ParameterStoreAwsAccessKeyId
- Name: AWS_SECRET_ACCESS_KEY
Type: PARAMETER_STORE
Value: !Ref ParameterStoreAwsSecretAccessKey
TimeoutInMinutes: 60
WindowsVS2015Project:
Type: AWS::CodeBuild::Project
Properties:
Name:
!Join
- '-'
- - !Ref WindowsProjectName
- vs2015
- !Ref BuildConfig
ServiceRole: !Ref BinaryReleaseCodeBuildRole
Source:
Type: CODEPIPELINE
BuildSpec: !Ref WindowsBuildSpecLocation
Artifacts:
Type: CODEPIPELINE
Environment:
Type: WINDOWS_CONTAINER
ComputeType: BUILD_GENERAL1_LARGE
Image: !Ref WindowsVS2015ImageName
EnvironmentVariables:
- Name: ARCHITURE
Type: PLAINTEXT
Value: Windows2015
- Name: BUILD_CONFIG
Type: PLAINTEXT
Value: !Ref BuildConfig
- Name: BUILD_PARALLEL
Type: PLAINTEXT
Value: !Ref BuildParallel
- Name: NOTIFICATIONS_TOPIC
Type: PLAINTEXT
Value: !Ref BinaryReleaseResultNotificationsTopic
- Name: AWS_ACCESS_KEY_ID
Type: PARAMETER_STORE
Value: !Ref ParameterStoreAwsAccessKeyId
- Name: AWS_SECRET_ACCESS_KEY
Type: PARAMETER_STORE
Value: !Ref ParameterStoreAwsSecretAccessKey
TimeoutInMinutes: 90
WindowsVS2017Project:
Type: AWS::CodeBuild::Project
Properties:
Name:
!Join
- '-'
- - !Ref WindowsProjectName
- vs2017
- !Ref BuildConfig
ServiceRole: !Ref BinaryReleaseCodeBuildRole
Source:
Type: CODEPIPELINE
BuildSpec: !Ref WindowsBuildSpecLocation
Artifacts:
Type: CODEPIPELINE
Environment:
Type: WINDOWS_CONTAINER
ComputeType: BUILD_GENERAL1_LARGE
Image: !Ref WindowsVS2017ImageName
EnvironmentVariables:
- Name: ARCHITURE
Type: PLAINTEXT
Value: Windows2017
- Name: BUILD_CONFIG
Type: PLAINTEXT
Value: !Ref BuildConfig
- Name: BUILD_PARALLEL
Type: PLAINTEXT
Value: !Ref BuildParallel
- Name: NOTIFICATIONS_TOPIC
Type: PLAINTEXT
Value: !Ref BinaryReleaseResultNotificationsTopic
- Name: AWS_ACCESS_KEY_ID
Type: PARAMETER_STORE
Value: !Ref ParameterStoreAwsAccessKeyId
- Name: AWS_SECRET_ACCESS_KEY
Type: PARAMETER_STORE
Value: !Ref ParameterStoreAwsSecretAccessKey
TimeoutInMinutes: 90
AndroidArm32Api19Project:
Type: AWS::CodeBuild::Project
Properties:
Name:
!Join
- '-'
- - !Ref AndroidProjectName
- arm32
- api19
- !Ref BuildConfig
ServiceRole: !Ref BinaryReleaseCodeBuildRole
Source:
Type: CODEPIPELINE
BuildSpec: !Ref AndroidBuildSpecLocation
Artifacts:
Type: CODEPIPELINE
Environment:
Type: LINUX_CONTAINER
ComputeType: BUILD_GENERAL1_LARGE
Image: !Ref LinuxGccImageName
EnvironmentVariables:
- Name: ARCHITECTURE
Type: PLAINTEXT
Value: AndroidArm32
- Name: API_LEVEL
Type: PLAINTEXT
Value: 19
- Name: BUILD_CONFIG
Type: PLAINTEXT
Value: !Ref BuildConfig
- Name: BUILD_PARALLEL
Type: PLAINTEXT
Value: !Ref BuildParallel
- Name: NOTIFICATIONS_TOPIC
Type: PLAINTEXT
Value: !Ref BinaryReleaseResultNotificationsTopic
- Name: AWS_ACCESS_KEY_ID
Type: PARAMETER_STORE
Value: !Ref ParameterStoreAwsAccessKeyId
- Name: AWS_SECRET_ACCESS_KEY
Type: PARAMETER_STORE
Value: !Ref ParameterStoreAwsSecretAccessKey
TimeoutInMinutes: 60
AndroidArm32Api21Project:
Type: AWS::CodeBuild::Project
Properties:
Name:
!Join
- '-'
- - !Ref AndroidProjectName
- arm32
- api21
- !Ref BuildConfig
ServiceRole: !Ref BinaryReleaseCodeBuildRole
Source:
Type: CODEPIPELINE
BuildSpec: !Ref AndroidBuildSpecLocation
Artifacts:
Type: CODEPIPELINE
Environment:
Type: LINUX_CONTAINER
ComputeType: BUILD_GENERAL1_LARGE
Image: !Ref LinuxGccImageName
EnvironmentVariables:
- Name: ARCHITECTURE
Type: PLAINTEXT
Value: AndroidArm32
- Name: API_LEVEL
Type: PLAINTEXT
Value: 21
- Name: BUILD_CONFIG
Type: PLAINTEXT
Value: !Ref BuildConfig
- Name: BUILD_PARALLEL
Type: PLAINTEXT
Value: !Ref BuildParallel
- Name: NOTIFICATIONS_TOPIC
Type: PLAINTEXT
Value: !Ref BinaryReleaseResultNotificationsTopic
- Name: AWS_ACCESS_KEY_ID
Type: PARAMETER_STORE
Value: !Ref ParameterStoreAwsAccessKeyId
- Name: AWS_SECRET_ACCESS_KEY
Type: PARAMETER_STORE
Value: !Ref ParameterStoreAwsSecretAccessKey
TimeoutInMinutes: 60
AndroidArm64Api21Project:
Type: AWS::CodeBuild::Project
Properties:
Name:
!Join
- '-'
- - !Ref AndroidProjectName
- arm64
- api21
- !Ref BuildConfig
ServiceRole: !Ref BinaryReleaseCodeBuildRole
Source:
Type: CODEPIPELINE
BuildSpec: !Ref AndroidBuildSpecLocation
Artifacts:
Type: CODEPIPELINE
Environment:
Type: LINUX_CONTAINER
ComputeType: BUILD_GENERAL1_LARGE
Image: !Ref LinuxGccImageName
EnvironmentVariables:
- Name: ARCHITECTURE
Type: PLAINTEXT
Value: AndroidArm64
- Name: API_LEVEL
Type: PLAINTEXT
Value: 21
- Name: BUILD_CONFIG
Type: PLAINTEXT
Value: !Ref BuildConfig
- Name: BUILD_PARALLEL
Type: PLAINTEXT
Value: !Ref BuildParallel
- Name: NOTIFICATIONS_TOPIC
Type: PLAINTEXT
Value: !Ref BinaryReleaseResultNotificationsTopic
- Name: AWS_ACCESS_KEY_ID
Type: PARAMETER_STORE
Value: !Ref ParameterStoreAwsAccessKeyId
- Name: AWS_SECRET_ACCESS_KEY
Type: PARAMETER_STORE
Value: !Ref ParameterStoreAwsSecretAccessKey
TimeoutInMinutes: 60


@@ -0,0 +1,12 @@
#!/bin/bash
rm -f ./not_a_release
aws s3 cp --quiet s3://aws-sdk-cpp-pipeline-sdks-team/not_a_release ./not_a_release
if [ -f ./not_a_release ]; then
aws s3 rm s3://aws-sdk-cpp-pipeline-sdks-team/not_a_release
exit 1
fi
exit 0


@@ -0,0 +1,49 @@
version: 0.2
phases:
build:
commands:
- export SDK_ROOT=$CODEBUILD_SRC_DIR/aws-sdk-cpp
- cd $SDK_ROOT
# Testing the first approach to build custom client as a separate package, which means you have to build and install aws-sdk-cpp first.
# Generate custom client source code under custom-service/ with the API description file located at code-generation/api-descriptions/custom-service.
- python scripts/generate_sdks.py --pathToApiDefinitions=code-generation/api-descriptions/custom-service --outputLocation custom-service --serviceName custom-service --apiVersion 2017-11-03 --namespace Custom --prepareTool --standalone
# Build and install aws-cpp-sdk-core
- mkdir -p $SDK_ROOT/build/AWSSDK
- mkdir -p $SDK_ROOT/install
- cd $SDK_ROOT/build/AWSSDK
- cmake $SDK_ROOT -DBUILD_ONLY="core" -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX="$SDK_ROOT/install" -DBUILD_SHARED_LIBS=ON
- make -j 8
- make install
# Build custom-service
- mkdir -p $SDK_ROOT/build/custom-service
- cd $SDK_ROOT/build/custom-service
- cmake $SDK_ROOT/custom-service/aws-cpp-sdk-custom-service -DCMAKE_BUILD_TYPE=Debug -DCMAKE_PREFIX_PATH="$SDK_ROOT/install" -DAWSSDK_ROOT_DIR="$SDK_ROOT/install" -DBUILD_SHARED_LIBS=ON
- make -j 8
# Build and run custom-service integration tests
- mkdir -p $SDK_ROOT/build/custom-service-integration-tests
- cd $SDK_ROOT/build/custom-service-integration-tests
- cmake $SDK_ROOT/aws-cpp-sdk-custom-service-integration-tests -DCMAKE_BUILD_TYPE=Debug -DCMAKE_PREFIX_PATH="$SDK_ROOT/install;$SDK_ROOT/build/custom-service" -DAWSSDK_ROOT_DIR="$SDK_ROOT/install" -DBUILD_SHARED_LIBS=ON -DSTANDALONE=ON
- make -j 8
- $SDK_ROOT/build/custom-service-integration-tests/aws-cpp-sdk-custom-service-integration-tests
# Testing the second approach to build custom client along with AWS C++ SDK, which means we will build everything altogether at the same time.
# Copy the c2j model to code-generation/api-descriptions
- cp $SDK_ROOT/code-generation/api-descriptions/custom-service/custom-service-2017-11-03.normal.json $SDK_ROOT/code-generation/api-descriptions/petstore-2017-11-03.normal.json
# Build and install aws-cpp-sdk-core and aws-cpp-sdk-petstore
- mkdir -p $SDK_ROOT/build_all
- mkdir -p $SDK_ROOT/install_all
- cd $SDK_ROOT/build_all
- cmake $SDK_ROOT -DBUILD_ONLY=core -DADD_CUSTOM_CLIENTS="serviceName=petstore, version=2017-11-03" -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=$SDK_ROOT/install_all -DBUILD_SHARED_LIBS=ON
- make -j 8
- make install
# Build and run petstore integration tests
- mkdir -p $SDK_ROOT/build_tests
- cd $SDK_ROOT/build_tests
- cmake $SDK_ROOT/aws-cpp-sdk-custom-service-integration-tests -DCMAKE_BUILD_TYPE=Debug -DCMAKE_PREFIX_PATH="$SDK_ROOT/install_all" -DAWSSDK_ROOT_DIR="$SDK_ROOT/install_all" -DBUILD_SHARED_LIBS=ON -DSTANDALONE=OFF
- make -j 8
- $SDK_ROOT/build_tests/aws-cpp-sdk-custom-service-integration-tests


@@ -0,0 +1,50 @@
version: 0.2
phases:
build:
commands:
- $SDK_ROOT="$Env:CODEBUILD_SRC_DIR/aws-sdk-cpp"
- cd $SDK_ROOT
# Testing the first approach to build custom client as a separate package, which means you have to build and install aws-sdk-cpp first.
# Generate custom client source code under custom-service/ with the API description file located at code-generation/api-descriptions/custom-service.
- python scripts/generate_sdks.py --pathToApiDefinitions=code-generation/api-descriptions/custom-service --outputLocation custom-service --serviceName custom-service --apiVersion 2017-11-03 --namespace Custom --prepareTool --standalone
# Build and install aws-cpp-sdk-core
- mkdir -p $SDK_ROOT/build/AWSSDK
- mkdir -p $SDK_ROOT/install
- cd $SDK_ROOT/build/AWSSDK
- cmake $SDK_ROOT -DBUILD_ONLY="core" -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX="$SDK_ROOT/install" -DBUILD_SHARED_LIBS=ON
- MSBuild.exe ALL_BUILD.vcxproj -p:Configuration=Debug -m
- MSBuild.exe INSTALL.vcxproj -p:Configuration=Debug
# Build custom-service
- mkdir -p $SDK_ROOT/build/custom-service
- cd $SDK_ROOT/build/custom-service
- cmake $SDK_ROOT/custom-service/aws-cpp-sdk-custom-service -DCMAKE_PREFIX_PATH="$SDK_ROOT/install" -DAWSSDK_ROOT_DIR="$SDK_ROOT/install" -DCMAKE_INSTALL_PREFIX="$SDK_ROOT/install" -DBUILD_SHARED_LIBS=ON -DUSE_WINDOWS_DLL_SEMANTICS=ON
- MSBuild.exe ALL_BUILD.vcxproj -p:Configuration=Debug -m
- MSBuild.exe INSTALL.vcxproj -p:Configuration=Debug
# Build and run custom-service integration tests
- mkdir -p $SDK_ROOT/build/custom-service-integration-tests
- cd $SDK_ROOT/build/custom-service-integration-tests
- cmake $SDK_ROOT/aws-cpp-sdk-custom-service-integration-tests -DCMAKE_PREFIX_PATH="$SDK_ROOT/install" -DAWSSDK_ROOT_DIR="$SDK_ROOT/install" -DBUILD_SHARED_LIBS=ON -DSTANDALONE=ON
- MSBuild.exe ALL_BUILD.vcxproj -p:Configuration=Debug -m
- ./Debug/aws-cpp-sdk-custom-service-integration-tests
# Testing the second approach to build custom client along with AWS C++ SDK, which means we will build everything altogether at the same time.
# Copy the c2j model to code-generation/api-descriptions
- cp $SDK_ROOT/code-generation/api-descriptions/custom-service/custom-service-2017-11-03.normal.json $SDK_ROOT/code-generation/api-descriptions/petstore-2017-11-03.normal.json
# Build and install aws-cpp-sdk-core and aws-cpp-sdk-petstore
- mkdir -p $SDK_ROOT/build_all
- mkdir -p $SDK_ROOT/install_all
- cd $SDK_ROOT/build_all
- cmake $SDK_ROOT -DBUILD_ONLY=core -DADD_CUSTOM_CLIENTS="serviceName=petstore, version=2017-11-03" -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX="$SDK_ROOT/install_all" -DBUILD_SHARED_LIBS=ON
- MSBuild.exe ALL_BUILD.vcxproj -p:Configuration=Debug -m
- MSBuild.exe INSTALL.vcxproj -p:Configuration=Debug
# Build and run petstore integration tests
- mkdir -p $SDK_ROOT/build_tests
- cd $SDK_ROOT/build_tests
- cmake $SDK_ROOT/aws-cpp-sdk-custom-service-integration-tests -DCMAKE_PREFIX_PATH="$SDK_ROOT/install_all" -DAWSSDK_ROOT_DIR="$SDK_ROOT/install_all" -DBUILD_SHARED_LIBS=ON -DSTANDALONE=OFF
- MSBuild.exe ALL_BUILD.vcxproj -p:Configuration=Debug -m
- ./Debug/aws-cpp-sdk-custom-service-integration-tests.exe


@@ -0,0 +1,21 @@
# Using Amazon Linux 2 docker image
FROM amazonlinux:2
#Install g++
RUN yum groupinstall "Development Tools" -y
#Install cmake
RUN curl https://cmake.org/files/v3.13/cmake-3.13.3-Linux-x86_64.tar.gz --output cmake-3.13.3-Linux-x86_64.tar.gz && \
tar -xvzf cmake-3.13.3-Linux-x86_64.tar.gz && \
mv cmake-3.13.3-Linux-x86_64 /opt && \
rm cmake-3.13.3-Linux-x86_64.tar.gz && \
ln -s /opt/cmake-3.13.3-Linux-x86_64/bin/cmake /usr/local/bin/cmake
#Install curl and openssl
RUN yum install curl-devel -y && \
yum install openssl-devel -y && \
yum install ninja-build -y
#Install awscli
RUN yum install python-pip -y && \
pip install awscli


@@ -0,0 +1,28 @@
# Using official gcc docker image
FROM gcc:7.4
# Install zip, cmake, maven, python-pip via apt
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y zip cmake python-pip
# Install awscli
RUN pip install awscli --upgrade
# We could install openjdk with "apt install openjdk-8-jdk", but it causes issues when building code-generation, so we install it manually.
RUN wget --no-check-certificate -c --header "Cookie: oraclelicense=accept-securebackup-cookie" https://download.oracle.com/otn-pub/java/jdk/8u191-b12/2787e4a523244c269598db4e85c51e0c/jdk-8u191-linux-x64.tar.gz && \
tar zxvf jdk-8u191-linux-x64.tar.gz && \
mkdir /usr/bin/java && \
mv jdk1.8.0_191 /usr/bin/java && \
rm jdk-8u191-linux-x64.tar.gz && \
ln -s /usr/bin/java/jdk1.8.0_191/bin/java /bin/java && \
ln -s /usr/bin/java/jdk1.8.0_191/bin/javac /bin/javac
ENV JAVA_HOME /usr/bin/java/jdk1.8.0_191
RUN apt-get install -y maven
# Download and install Android NDK
RUN wget https://dl.google.com/android/repository/android-ndk-r19c-linux-x86_64.zip && \
unzip android-ndk-r19c-linux-x86_64.zip && \
mv android-ndk-r19c /opt && \
rm android-ndk-r19c-linux-x86_64.zip
ENV ANDROID_NDK /opt/android-ndk-r19c


@@ -0,0 +1,23 @@
# Using official ubuntu docker image
FROM ubuntu:18.04
# Install git, zip, python-pip, cmake, g++, zlib, libssl, libcurl, java, maven via apt
RUN apt update && \
apt upgrade -y && \
apt install -y git zip wget python-pip python3 python3-pip cmake g++ zlib1g-dev libssl-dev libcurl4-openssl-dev openjdk-8-jdk doxygen ninja-build
# Install maven
RUN apt install -y maven
# Install awscli
RUN pip install awscli --upgrade
# Install boto3
RUN pip3 install boto3 --upgrade
# Download and install Android NDK
RUN wget https://dl.google.com/android/repository/android-ndk-r19c-linux-x86_64.zip && \
unzip android-ndk-r19c-linux-x86_64.zip && \
mv android-ndk-r19c /opt && \
rm android-ndk-r19c-linux-x86_64.zip
ENV ANDROID_NDK /opt/android-ndk-r19c


@@ -0,0 +1,45 @@
# escape=`
FROM microsoft/windowsservercore:ltsc2016
ADD https://download.microsoft.com/download/6/A/A/6AA4EDFF-645B-48C5-81CC-ED5963AEAD48/vc_redist.x64.exe /vc_redist.x64.exe
RUN start /wait C:\vc_redist.x64.exe /quiet /norestart
# Install chocolatey
RUN @powershell -NoProfile -ExecutionPolicy unrestricted -Command "$env:chocolateyUseWindowsCompression = 'true'; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12; (iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))) >$null 2>&1"
RUN choco install git 7zip -y
RUN choco install cmake --installargs 'ADD_CMAKE_TO_PATH=""System""' -y
# Install Visual C++ Build Tools, as per: https://chocolatey.org/packages/visualcpp-build-tools
RUN choco install visualcpp-build-tools -version 14.0.25420.1 -y
RUN setx /M PATH "C:\Program Files (x86)\Windows Kits\10\bin\x86\ucrt;C:\Program Files (x86)\Windows Kits\10\bin\x64\ucrt;%PATH%"
# Add msbuild to PATH
RUN setx /M PATH "%PATH%;C:\Program Files (x86)\MSBuild\14.0\bin"
# Test msbuild can be accessed without path
RUN msbuild -version
# Install Java
RUN choco install jdk8 -y
# Add Java to PATH
RUN setx /M PATH "%PATH%;C:\Program Files\Java\jdk_1.8.0_172\bin"
# Install Maven
RUN choco install maven -y
# Install Python3
RUN choco install python -y
# Add Python to PATH
RUN setx /M PATH "%PATH%;C:\Python36"
# Install boto3
RUN pip install boto3 --upgrade
# Install awscli
RUN pip install awscli --upgrade
CMD [ "cmd.exe" ]


@@ -0,0 +1,47 @@
# escape=`
FROM microsoft/windowsservercore:ltsc2016
ADD https://download.microsoft.com/download/6/A/A/6AA4EDFF-645B-48C5-81CC-ED5963AEAD48/vc_redist.x64.exe /vc_redist.x64.exe
RUN start /wait C:\vc_redist.x64.exe /quiet /norestart
# Install chocolatey
RUN @powershell -NoProfile -ExecutionPolicy unrestricted -Command "$env:chocolateyUseWindowsCompression = 'true'; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12; (iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))) >$null 2>&1"
RUN choco install git 7zip -y
RUN choco install cmake --installargs 'ADD_CMAKE_TO_PATH=""System""' -y
# Install Visual C++ Build Tools, as per: https://chocolatey.org/packages/visualcpp-build-tools
RUN powershell -NoProfile -InputFormat None -Command `
choco install visualcpp-build-tools -version 15.0.26228.20170424 -y; `
Write-Host 'Waiting for Visual C++ Build Tools to finish'; `
Wait-Process -Name vs_installer
# Add msbuild to PATH
RUN setx /M PATH "%PATH%;C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin"
# Test msbuild can be accessed without path
RUN msbuild -version
# Install Java
RUN choco install jdk8 -y
# Add Java to PATH
RUN setx /M PATH "%PATH%;C:\Program Files\Java\jdk_1.8.0_172\bin"
# Install Maven
RUN choco install maven -y
# Install Python3
RUN choco install python -y
# Add Python to PATH
RUN setx /M PATH "%PATH%;C:\Python36"
# Install boto3
RUN pip install boto3 --upgrade
# Install awscli
RUN pip install awscli --upgrade
CMD [ "cmd.exe" ]


@@ -0,0 +1,57 @@
from __future__ import print_function
import json
import zipfile
import boto3
from botocore.exceptions import ClientError
print('Loading function')
bucket_name = 'aws-sdk-cpp-pipeline-sdks-team'
key = 'pending-releases.zip'
temp_archive_file = '/tmp/pending_releases.zip'
artifact = 'pending_releases'
temp_artifact_file = '/tmp/pending_releases'
s3 = boto3.client('s3')
def lambda_handler(event, context):
message = event['Records'][0]['Sns']['Message']
print("From SNS: " + message)
releasesDoc = {}
releasesDoc['releases'] = []
pendingReleases = None
try:
pendingReleases = s3.get_object(Bucket=bucket_name, Key=key)
body_stream_to_file(pendingReleases["Body"].read())
releasesDoc = read_zipped_release_doc()
except ClientError as e:
print("Couldn't pull doc, assuming it is empty. Exception: " + str(e))
releasesDoc['releases'].append(json.loads(message)["release"])
write_zipped_release_doc(releasesDoc)
with open(temp_archive_file, 'rb') as archive:
s3.put_object(Bucket=bucket_name, Key=key, Body=archive.read())
return message
def read_zipped_release_doc():
archive = zipfile.ZipFile(temp_archive_file, 'r')
with archive.open(artifact) as artifactFile:
return json.loads(artifactFile.read())
def write_zipped_release_doc(doc):
releasesDocStr = json.dumps(doc)
print("New Release Doc: " + releasesDocStr)
with open(temp_artifact_file, "w") as artifactFile:
artifactFile.write(releasesDocStr)
with zipfile.ZipFile(temp_archive_file, 'w') as archiveStream:
archiveStream.write(temp_artifact_file, artifact)
def body_stream_to_file(body):
    with open(temp_archive_file, 'wb') as archiveFile:
archiveFile.write(body)
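
The Lambda above keeps the pending-release list as a JSON document stored inside a zip archive on S3. A minimal local sketch of that read/append/write round-trip, with no S3 involved (file and key names here are illustrative, not the pipeline's real artifact names):

```python
import json
import os
import tempfile
import zipfile

def append_release(archive_path, artifact_name, release):
    """Append one release record to a JSON doc stored inside a zip archive."""
    doc = {'releases': []}
    if os.path.exists(archive_path):
        # Read the existing doc out of the archive, as the handler does.
        with zipfile.ZipFile(archive_path, 'r') as archive:
            with archive.open(artifact_name) as artifact:
                doc = json.loads(artifact.read())
    doc['releases'].append(release)
    # Rewrite the archive with the updated doc.
    with zipfile.ZipFile(archive_path, 'w') as archive:
        archive.writestr(artifact_name, json.dumps(doc))
    return doc

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, 'pending_releases.zip')
append_release(path, 'pending_releases', {'id': 'r-1'})
doc = append_release(path, 'pending_releases', {'id': 'r-2'})
print([r['id'] for r in doc['releases']])  # → ['r-1', 'r-2']
```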

View File

@@ -0,0 +1,7 @@
cmake_minimum_required(VERSION 3.3)
set(CMAKE_CXX_STANDARD 11)
project(app LANGUAGES CXX)
find_package(AWSSDK REQUIRED COMPONENTS s3)
add_executable(${PROJECT_NAME} "main.cpp")
target_link_libraries(${PROJECT_NAME} ${AWSSDK_LINK_LIBRARIES})
target_compile_options(${PROJECT_NAME} PRIVATE "-Wall" "-Werror")

View File

@@ -0,0 +1,29 @@
#include <aws/core/Aws.h>
#include <aws/core/utils/logging/LogLevel.h>
#include <aws/s3/S3Client.h>
#include <iostream>
using namespace Aws;
int main(int argc, char *argv[])
{
SDKOptions options;
options.loggingOptions.logLevel = Utils::Logging::LogLevel::Warn;
InitAPI(options);
{
S3::S3Client client;
auto outcome = client.ListBuckets();
if (outcome.IsSuccess()) {
std::cout << "Found " << outcome.GetResult().GetBuckets().size() << " buckets\n";
for (auto&& b : outcome.GetResult().GetBuckets()) {
std::cout << b.GetName() << std::endl;
}
} else {
std::cout << "Failed with error: " << outcome.GetError() << std::endl;
}
}
ShutdownAPI(options);
return 0;
}

View File

@@ -0,0 +1,89 @@
from __future__ import print_function
import json
import zipfile
import boto3
import os
import re
import sys
import argparse
from botocore.exceptions import ClientError
import requests
import requests.packages.urllib3
requests.packages.urllib3.disable_warnings()
temp_archive_file = 'models.zip'
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-r', '--releaseDoc')
parser.add_argument('-m', '--modelsDir')
args = parser.parse_args()
releaseDocPath = args.releaseDoc
modelsDir = args.modelsDir
print('Release Doc path {0}'.format(releaseDocPath))
print('Models Directory {0}'.format(modelsDir))
releaseDoc = {}
pendingReleases = None
with open(releaseDocPath, "r") as releaseDocFileStream:
releaseDoc = json.loads(releaseDocFileStream.read())
if(len(releaseDoc) == 0 or len(releaseDoc["releases"]) == 0):
return
for release in releaseDoc["releases"]:
for feature in release["features"]:
            if feature["c2jModels"] is not None:
response = requests.get(feature["c2jModels"])
if response.status_code != 200:
                    print("Error downloading {0} artifacts, skipping.".format(json.dumps(feature)))
continue
body_stream_to_file(response.content)
copy_model_files(modelsDir)
cat_release_notes(feature["releaseNotes"], modelsDir)
cat_pending_releases(release["id"], modelsDir)
emptyReleaseDoc = "{ \"releases\": []}"
with open(releaseDocPath, "w") as emptyReleaseFile:
emptyReleaseFile.write(emptyReleaseDoc)
def copy_model_files(models_dir):
archive = zipfile.ZipFile(temp_archive_file, 'r')
archive.debug = 3
for info in archive.infolist():
print(info.filename)
if re.match(r'output/.*\.normal\.json', info.filename):
outputPath = os.path.join(models_dir, os.path.basename(info.filename))
print("copying {0} to {1}".format(info.filename, outputPath))
fileHandle = archive.open(info.filename, 'r')
fileOutput = fileHandle.read()
with open(outputPath, 'wb') as destination:
destination.write(fileOutput)
fileHandle.close()
def body_stream_to_file(body):
    with open(temp_archive_file, 'wb') as archiveFile:
archiveFile.write(body)
def cat_release_notes(releaseNotes, models_path):
with open(os.path.join(models_path, "release_notes"), "a") as releaseNotesFile:
releaseNotesFile.write(releaseNotes + "\n\n")
def cat_pending_releases(release_guid, models_path):
with open(os.path.join(models_path, "pending_releases"), "a") as pendingReleasesFile:
pendingReleasesFile.write(release_guid + "\n")
if __name__ == "__main__":
main()
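
`copy_model_files` above picks only the `output/*.normal.json` members out of the downloaded archive. A self-contained sketch of that selection step (the member names below are made up for the demo):

```python
import io
import re
import zipfile

def select_model_files(archive_bytes):
    """Return the names of zip members matching output/*.normal.json."""
    names = []
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as archive:
        for info in archive.infolist():
            # Same pattern the script uses; re.match anchors at the start.
            if re.match(r'output/.*\.normal\.json', info.filename):
                names.append(info.filename)
    return names

# Build a small in-memory archive with one matching and two non-matching members.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:
    z.writestr('output/s3-2006-03-01.normal.json', '{}')
    z.writestr('output/readme.txt', 'skip me')
    z.writestr('input/ec2.normal.json', 'wrong folder')
print(select_model_files(buf.getvalue()))  # → ['output/s3-2006-03-01.normal.json']
```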

View File

@@ -0,0 +1,8 @@
#!/bin/bash
IFS=$'\n' read -d '' -r -a releases < $1/pending_releases
for i in "${releases[@]}"
do
aws sqs send-message --debug --message-group-id "needlessField" --queue-url "$4" --message-body "{ \"releaseId\": \"$i\", \"language\": \"CPP\", \"releaseState\":\"$2\", \"statusMessage\":\"$3\" }" --region us-west-2
done
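
The loop above posts one JSON message per pending release id. A sketch of the body construction in Python, using the same field names the shell loop embeds in its `--message-body` (the release id and messages are placeholders):

```python
import json

def release_message(release_id, state, status_message):
    # Mirrors the JSON the script builds inline with string escapes.
    return json.dumps({
        'releaseId': release_id,
        'language': 'CPP',
        'releaseState': state,
        'statusMessage': status_message,
    })

body = json.loads(release_message('r-42', 'Success', 'done'))
print(body['releaseId'], body['releaseState'])  # → r-42 Success
```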

View File

@@ -0,0 +1,19 @@
#!/bin/bash -e
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
if [ -f ./models/pending_releases ] && [ -s ./models/pending_releases ];
then
aws s3 cp s3://aws-sdk-cpp-pipeline-sdks-team/modelsSnapshot.zip modelsLatest.zip --region us-east-1
unzip modelsLatest.zip -d modelsLatest
rm modelsLatest.zip
grep -vf ./models/pending_releases ./modelsLatest/models/pending_releases | xargs | tee ./modelsLatest/models/pending_releases
grep -vf ./models/release_notes ./modelsLatest/models/release_notes | xargs | tee ./modelsLatest/models/release_notes
touch ./not_a_release
aws s3 cp not_a_release s3://aws-sdk-cpp-pipeline-sdks-team/not_a_release --region us-east-1
rm -rf ./models
mkdir ./models
cp -r ./modelsLatest/models/* ./models
zip -r modelsSnapshot.zip ./models
aws s3 cp modelsSnapshot.zip s3://aws-sdk-cpp-pipeline-sdks-team/modelsSnapshot.zip --region us-east-1
fi
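
The `grep -vf` steps above drop every line of the latest `pending_releases` that already appears in the just-released set. A simplified pure-Python sketch of that filtering: note that `grep -vf` treats the released file as patterns and the script additionally flattens output through `xargs`, while this sketch uses exact-id matching and keeps one id per list entry:

```python
def remaining_releases(latest_lines, released_lines):
    """Keep only ids from latest_lines that are not in released_lines."""
    released = set(line.strip() for line in released_lines if line.strip())
    return [line.strip() for line in latest_lines
            if line.strip() and line.strip() not in released]

print(remaining_releases(['r-1', 'r-2', 'r-3'], ['r-2']))  # → ['r-1', 'r-3']
```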

View File

@@ -0,0 +1,13 @@
#!/bin/bash
set -e
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
aws s3 cp s3://aws-sdk-cpp-pipeline-sdks-team/modelsSnapshot.zip models.zip --region us-east-1
unzip models.zip
rm models.zip
cp $1 $1_cpy
python $DIR/move_release_doc_to_models.py --modelsDir="./models" --releaseDoc="$1"
rm models.zip
zip -r modelsSnapshot.zip ./models
aws s3 cp modelsSnapshot.zip s3://aws-sdk-cpp-pipeline-sdks-team/modelsSnapshot.zip --region us-east-1
zip -r pending-releases.zip -j $1
aws s3 cp pending-releases.zip s3://aws-sdk-cpp-pipeline-sdks-team/pending-releases.zip --region us-east-1

View File

@@ -0,0 +1,45 @@
import os
import json
import boto3
import argparse
UPDATE_STATUS_LAMBDA_FUNCTION_NAME = os.environ['UPDATE_STATUS_LAMBDA_FUNCTION_NAME']
lambdaClient = boto3.client('lambda', region_name = os.environ['AWS_REGION'])
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-s', '--stage_name', default = 'Unknown')
parser.add_argument('-e', '--internal_message', default = '')
parser.add_argument('-i', '--release_id', default = '')
parser.add_argument('-m', '--status_message', default = '')
parser.add_argument('-b', '--build_succeeding', default = '0')
parser.add_argument('-o', '--internal_only', action = 'store_true')
parser.add_argument('-c', '--release_complete', action = 'store_true')
args = parser.parse_args()
updateStatus({
'stageName': args.stage_name,
'internalMessage': args.internal_message,
'internalOnly': args.internal_only,
'messageToTrebuchet': {
'releaseId' : args.release_id,
'language' : 'CPP',
'releaseState' : 'Success' if args.release_complete else ('InProgress' if args.build_succeeding == '1' else 'Blocked'),
'statusMessage' : args.status_message
}
})
def updateStatus(updateStatusMessage):
print('[Lambda] Triggering Lambda function to update status:', end = ' ')
print(updateStatusMessage)
response = lambdaClient.invoke(
FunctionName = UPDATE_STATUS_LAMBDA_FUNCTION_NAME,
InvocationType = 'RequestResponse',
Payload = json.dumps(updateStatusMessage)
)
print('Response:', end = ' ')
print(response)
if response['ResponseMetadata']['HTTPStatusCode'] != 200:
quit(1)
main()
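
The release state sent to Trebuchet is derived from two flags in the nested conditional above. The same decision, pulled out as a small standalone function for illustration:

```python
def release_state(build_succeeding, release_complete):
    """Map CI flags to a Trebuchet release state, as UpdateStatus.py does."""
    if release_complete:
        return 'Success'
    # CODEBUILD_BUILD_SUCCEEDING arrives as the string '1' or '0'.
    return 'InProgress' if build_succeeding == '1' else 'Blocked'

print(release_state('0', False))  # → Blocked
```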

View File

@@ -0,0 +1,26 @@
version: 0.2
phases:
build:
commands:
- echo $CODEBUILD_SOURCE_VERSION
- export RELEASE_ID=$(cat $RELEASE_ID_FILENAME)
- python3 aws-sdk-cpp/CI/trebuchet-release-pipeline/UpdateStatus.py -s Build -i "$RELEASE_ID" -m "Step 2 of 4. Verifying Build." -b $CODEBUILD_BUILD_SUCCEEDING
- mv * /tmp && mkdir -p /tmp/build
- cd /tmp/aws-sdk-cpp
- python ./scripts/endpoints_checker.py
- cd ../build
- cmake ../aws-sdk-cpp -DCMAKE_BUILD_TYPE=Debug -DENABLE_ADDRESS_SANITIZER=ON -DMINIMIZE_SIZE=ON
- make -j 3
post_build:
commands:
- cd /tmp
- export BUILD_JOB_NAME=$(echo "${CODEBUILD_BUILD_ID}" | cut -f1 -d ":")
- export BUILD_URL="https://${AWS_REGION}.console.aws.amazon.com/codesuite/codebuild/projects/${BUILD_JOB_NAME}/build/${CODEBUILD_BUILD_ID}"
- |
if [ "${CODEBUILD_BUILD_SUCCEEDING}" = "0" ]; then
python3 aws-sdk-cpp/CI/trebuchet-release-pipeline/UpdateStatus.py -s Build -e "[BUILD FAILURE](${BUILD_URL}) (${CODEBUILD_BUILD_ID})" -i $RELEASE_ID -m "Step 2 of 4. Verification of Build Failed. A technician has already been notified." -b $CODEBUILD_BUILD_SUCCEEDING;
fi
artifacts:
files:
- "**/*"
base-directory: /tmp

View File

@@ -0,0 +1,22 @@
version: 0.2
phases:
pre_build:
commands:
- export RELEASE_ID=$(cat $RELEASE_ID_FILENAME)
- python3 aws-sdk-cpp/CI/trebuchet-release-pipeline/UpdateStatus.py -s IntegrationTests -i "$RELEASE_ID" -m "Step 3 of 4. Running Integration Tests." -b $CODEBUILD_BUILD_SUCCEEDING
build:
commands:
- echo $CODEBUILD_SOURCE_VERSION
- mv aws-sdk-cpp build /tmp
- cd /tmp/build
- python ../aws-sdk-cpp/scripts/run_integration_tests.py --testDir .
post_build:
commands:
- cd /tmp
- aws s3 cp ./build s3://${S3_BUCKET_NAME}/log/${CODEBUILD_BUILD_ID}/ --recursive --exclude "*" --include "aws*.log"
- export BUILD_JOB_NAME=$(echo "${CODEBUILD_BUILD_ID}" | cut -f1 -d ":")
- export BUILD_URL="https://${AWS_REGION}.console.aws.amazon.com/codesuite/codebuild/projects/${BUILD_JOB_NAME}/build/${CODEBUILD_BUILD_ID}"
- |
if [ "${CODEBUILD_BUILD_SUCCEEDING}" = "0" ]; then
python3 aws-sdk-cpp/CI/trebuchet-release-pipeline/UpdateStatus.py -s IntegrationTests -e "[BUILD FAILURE](${BUILD_URL}) (${CODEBUILD_BUILD_ID})" -i $RELEASE_ID -m "Step 3 of 4. Integration Tests Failed. A technician has already been notified." -b $CODEBUILD_BUILD_SUCCEEDING;
fi

View File

@@ -0,0 +1,29 @@
version: 0.2
phases:
build:
commands:
- echo $CODEBUILD_SOURCE_VERSION
- rm -rf aws-sdk-cpp
- git clone https://github.com/${GITHUB_PUBLIC_REPOSITORY}.git
- cd aws-sdk-cpp
- export VERSION_NUM=$(grep AWS_SDK_VERSION_STRING ./aws-cpp-sdk-core/include/aws/core/VersionConfig.h | cut -d '"' -f2)
- sed -i "s/PROJECT_NUMBER .*/PROJECT_NUMBER = $VERSION_NUM/" ./doxygen/doxygen.config
- doxygen ./doxygen/doxygen.config
- python doc_crosslinks/generate_cross_link_data.py --apiDefinitionsPath ./code-generation/api-descriptions/ --templatePath ./doc_crosslinks/crosslink_redirect.html --outputPath ./crosslink_redirect.html
- aws s3 cp ./doxygen/html s3://${DOCS_S3_BUCKET_NAME}/cpp/api/$VERSION_NUM --recursive
- aws s3 cp s3://${DOCS_S3_BUCKET_NAME}/cpp/api/$VERSION_NUM s3://${DOCS_S3_BUCKET_NAME}/cpp/api/LATEST --recursive
- aws s3 cp ./crosslink_redirect.html s3://${DOCS_S3_BUCKET_NAME}/cpp/api/crosslink_redirect.html
- mkdir aws_sdk_cpp
- cp -r ./doxygen/html aws_sdk_cpp
- cp -r ./crosslink_redirect.html aws_sdk_cpp
- zip -r documentation.zip ./aws_sdk_cpp
- aws s3 cp documentation.zip s3://${BINARY_S3_BUCKET_NAME}/cpp/builds/$VERSION_NUM/documentation.zip
post_build:
commands:
- cd $CODEBUILD_SRC_DIR
- export BUILD_JOB_NAME=$(echo "${CODEBUILD_BUILD_ID}" | cut -f1 -d ":")
- export BUILD_URL="https://${AWS_REGION}.console.aws.amazon.com/codesuite/codebuild/projects/${BUILD_JOB_NAME}/build/${CODEBUILD_BUILD_ID}"
- |
if [ "${CODEBUILD_BUILD_SUCCEEDING}" = "0" ]; then
python3 aws-sdk-cpp/CI/trebuchet-release-pipeline/UpdateStatus.py -s PublishAPIDocs -e "[BUILD FAILURE](${BUILD_URL}) (${CODEBUILD_BUILD_ID})" -i $RELEASE_ID -m "Publish API Docs Failed." -b $CODEBUILD_BUILD_SUCCEEDING -o;
fi

View File

@@ -0,0 +1,71 @@
version: 0.2
phases:
build:
commands:
- echo $CODEBUILD_SOURCE_VERSION
- export RELEASE_ID=$(cat $RELEASE_ID_FILENAME)
- if [ -s $RELEASE_NOTES_FILENAME ]; then export COMMIT_MSG="$(cat $RELEASE_NOTES_FILENAME)"; fi;
- python3 aws-sdk-cpp/CI/trebuchet-release-pipeline/UpdateStatus.py -s PushToGithub -i "$RELEASE_ID" -m "Step 4 of 4. Pushing Code to Public Github." -b $CODEBUILD_BUILD_SUCCEEDING
- cd aws-sdk-cpp
      # Verify the candidate commit, in case a new merge landed without testing during the release.
- if [ "$(git rev-parse --abbrev-ref HEAD)" != "master" ]; then exit 1; fi;
- git fetch --all
- if [ -n "$(git diff master origin/master)" ]; then exit 1; fi;
# Get highest tag number
- export VERSION=$(git describe --abbrev=0 --tags)
# Calculate new version
- export VERSION_MAJOR=$(echo $VERSION | cut -d '.' -f1)
- export VERSION_MINOR=$(echo $VERSION | cut -d '.' -f2)
- export VERSION_PATCH=$(echo $VERSION | cut -d '.' -f3)
- export VERSION_PATCH=$((VERSION_PATCH+1))
- export VERSION_BUMP=$VERSION_MAJOR.$VERSION_MINOR.$VERSION_PATCH
- echo "Updating $VERSION to $VERSION_BUMP"
# Write new version to VersionConfig.h
- sed -i "s/AWS_SDK_VERSION_STRING.*/AWS_SDK_VERSION_STRING \"$VERSION_BUMP\"/" aws-cpp-sdk-core/include/aws/core/VersionConfig.h
# git add
- git add --all
- git status
# Generate release notes
- if [ -z "$COMMIT_MSG" ]; then export COMMIT_MSG="Auto commit from CI."; fi;
# Commit to release candidate branch
- git config --global user.name "$GIT_COMMIT_AUTHOR_NAME"
- git config --global user.email "$GIT_COMMIT_AUTHOR_EMAIL"
- git commit -m "$COMMIT_MSG"
- git checkout release-candidate
- git merge master
- git push origin release-candidate
# Get current hash and see if it already has a tag
- export GIT_COMMIT=$(git rev-parse HEAD)
- export NEEDS_TAG=$(git describe --contains $GIT_COMMIT)
# Only tag if no tag already (would be better if the git describe command above could have a silent option)
- |
if [ -z "$NEEDS_TAG" ]; then
echo "Tagged with $VERSION_BUMP (Ignoring fatal:cannot describe - this means commit is untagged) "
git tag $VERSION_BUMP
git push --tags
else
echo "Already a tag on this commit"
fi
# Push code to Github
# - git fetch --tags
# - git fetch --all
# - git reset --hard HEAD
# - git checkout release-candidate
# - git pull
- git checkout master
- git pull
- git merge release-candidate
- git push https://${GIT_USERNAME}:${GIT_PASSWORD}@github.com/${GITHUB_PRIVATE_REPOSITORY}.git master
- git push https://${GIT_USERNAME}:${GIT_PASSWORD}@github.com/${GITHUB_PUBLIC_REPOSITORY}.git master
- git push --tags https://${GIT_USERNAME}:${GIT_PASSWORD}@github.com/${GITHUB_PUBLIC_REPOSITORY}.git
post_build:
commands:
- cd $CODEBUILD_SRC_DIR
- export BUILD_JOB_NAME=$(echo "${CODEBUILD_BUILD_ID}" | cut -f1 -d ":")
- export BUILD_URL="https://${AWS_REGION}.console.aws.amazon.com/codesuite/codebuild/projects/${BUILD_JOB_NAME}/build/${CODEBUILD_BUILD_ID}"
- |
if [ "${CODEBUILD_BUILD_SUCCEEDING}" = "1" ]; then
python3 aws-sdk-cpp/CI/trebuchet-release-pipeline/UpdateStatus.py -s PushToGithub -e "[BUILD SUCCESS](${BUILD_URL}) (${CODEBUILD_BUILD_ID})" -i $RELEASE_ID -m "Step 4 of 4. Code Pushed to Public Github." -b $CODEBUILD_BUILD_SUCCEEDING -c;
else
python3 aws-sdk-cpp/CI/trebuchet-release-pipeline/UpdateStatus.py -s PushToGithub -e "[BUILD FAILURE](${BUILD_URL}) (${CODEBUILD_BUILD_ID})" -i $RELEASE_ID -m "Step 4 of 4. Push to Public Github Failed. A technician has already been notified." -b $CODEBUILD_BUILD_SUCCEEDING;
fi
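
The tag-bump arithmetic in the buildspec above (split the latest tag on `.`, increment the patch component, reassemble) can be sketched as a standalone function; the version strings below are examples, not real SDK tags:

```python
def bump_patch(version):
    """Increment the patch component of a MAJOR.MINOR.PATCH version string."""
    major, minor, patch = version.split('.')
    return '{0}.{1}.{2}'.format(major, minor, int(patch) + 1)

print(bump_patch('1.8.100'))  # → 1.8.101
```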

View File

@@ -0,0 +1,30 @@
# This buildspec is source controlled; whenever you make any change in the AWS console, you should update it on GitHub.
version: 0.2
phases:
build:
commands:
- echo $CODEBUILD_SOURCE_VERSION
- git clone https://${GIT_USERNAME}:${GIT_PASSWORD}@github.com/${GITHUB_PRIVATE_REPOSITORY}.git aws-sdk-cpp
- export RELEASE_ID=$(cat $RELEASE_ID_FILENAME)
- python3 aws-sdk-cpp/CI/trebuchet-release-pipeline/UpdateStatus.py -s RegenerateCode -i "$RELEASE_ID" -m "Step 1 of 4. Regenerating Code with New Models." -b $CODEBUILD_BUILD_SUCCEEDING
- cp models/*.normal.json aws-sdk-cpp/code-generation/api-descriptions/
- cd aws-sdk-cpp
- mkdir build
- cd build
- cmake .. -DREGENERATE_CLIENTS=ON
- cd ..
- rm -rf build
post_build:
commands:
- cd $CODEBUILD_SRC_DIR
- export BUILD_JOB_NAME=$(echo "${CODEBUILD_BUILD_ID}" | cut -f1 -d ":")
- export BUILD_URL="https://${AWS_REGION}.console.aws.amazon.com/codesuite/codebuild/projects/${BUILD_JOB_NAME}/build/${CODEBUILD_BUILD_ID}"
- |
if [ "${CODEBUILD_BUILD_SUCCEEDING}" = "1" ]; then
python3 aws-sdk-cpp/CI/trebuchet-release-pipeline/UpdateStatus.py -s RegenerateCode -e "[BUILD SUCCESS](${BUILD_URL}) (${CODEBUILD_BUILD_ID})" -i $RELEASE_ID -m "Step 1 of 4. Regenerated Code with New Models." -b $CODEBUILD_BUILD_SUCCEEDING;
else
python3 aws-sdk-cpp/CI/trebuchet-release-pipeline/UpdateStatus.py -s RegenerateCode -e "[BUILD FAILURE](${BUILD_URL}) (${CODEBUILD_BUILD_ID})" -i $RELEASE_ID -m "Step 1 of 4. Code Generation with New Models Failed. A technician has already been notified." -b $CODEBUILD_BUILD_SUCCEEDING;
fi
artifacts:
files:
- "**/*"

View File

@@ -0,0 +1,28 @@
version: 0.2
phases:
build:
commands:
- echo ${Env:CODEBUILD_SOURCE_VERSION}
- $RELEASE_ID=$(cat ${Env:RELEASE_ID_FILENAME})
- mkdir C:\tmp
- mkdir C:\tmp\build
- mv * C:\tmp
- cd C:\tmp\build
- cmake.exe -G "Visual Studio 14 2015 Win64" -DCMAKE_BUILD_TYPE=Debug -DMINIMIZE_SIZE=ON ../aws-sdk-cpp
- msbuild.exe ALL_BUILD.vcxproj -p:Configuration=Debug -m
- cd ..
- Get-ChildItem aws-sdk-cpp -Exclude *tests | Where-Object Name -Like 'aws-cpp-sdk-*' | Remove-Item -Recurse -Force
- Get-ChildItem build -Exclude bin | Remove-Item -Recurse -Force
post_build:
commands:
- cd C:\tmp
- $BUILD_JOB_NAME=${Env:CODEBUILD_BUILD_ID}.Substring(0, ${Env:CODEBUILD_BUILD_ID}.IndexOf(":"))
- $BUILD_URL="https://${Env:AWS_REGION}.console.aws.amazon.com/codesuite/codebuild/projects/$BUILD_JOB_NAME/build/${Env:CODEBUILD_BUILD_ID}"
      - |
        if (${Env:CODEBUILD_BUILD_SUCCEEDING} -eq 0) {
          python aws-sdk-cpp/CI/trebuchet-release-pipeline/UpdateStatus.py -s Build -e "[BUILD FAILURE](${BUILD_URL}) (${Env:CODEBUILD_BUILD_ID})" -i $RELEASE_ID -m "Step 2 of 4. Verification of Build Failed. A technician has already been notified." -b ${Env:CODEBUILD_BUILD_SUCCEEDING};
        }
artifacts:
files:
- "**/*"
base-directory: C:\tmp

View File

@@ -0,0 +1,28 @@
version: 0.2
phases:
build:
commands:
- echo ${Env:CODEBUILD_SOURCE_VERSION}
- $RELEASE_ID=$(cat ${Env:RELEASE_ID_FILENAME})
- mkdir C:\tmp
- mkdir C:\tmp\build
- mv * C:\tmp
- cd C:\tmp\build
- cmake.exe -G "Visual Studio 15 2017 Win64" -DCMAKE_BUILD_TYPE=Debug -DMINIMIZE_SIZE=ON ../aws-sdk-cpp
- msbuild.exe ALL_BUILD.vcxproj -p:Configuration=Debug -m
- cd ..
- Get-ChildItem aws-sdk-cpp -Exclude *tests | Where-Object Name -Like 'aws-cpp-sdk-*' | Remove-Item -Recurse -Force
- Get-ChildItem build -Exclude bin | Remove-Item -Recurse -Force
post_build:
commands:
- cd C:\tmp
- $BUILD_JOB_NAME=${Env:CODEBUILD_BUILD_ID}.Substring(0, ${Env:CODEBUILD_BUILD_ID}.IndexOf(":"))
- $BUILD_URL="https://${Env:AWS_REGION}.console.aws.amazon.com/codesuite/codebuild/projects/$BUILD_JOB_NAME/build/${Env:CODEBUILD_BUILD_ID}"
      - |
        if (${Env:CODEBUILD_BUILD_SUCCEEDING} -eq 0) {
          python aws-sdk-cpp/CI/trebuchet-release-pipeline/UpdateStatus.py -s Build -e "[BUILD FAILURE](${BUILD_URL}) (${Env:CODEBUILD_BUILD_ID})" -i $RELEASE_ID -m "Step 2 of 4. Verification of Build Failed. A technician has already been notified." -b ${Env:CODEBUILD_BUILD_SUCCEEDING};
        }
artifacts:
files:
- "**/*"
base-directory: C:\tmp

View File

@@ -0,0 +1,21 @@
version: 0.2
phases:
build:
commands:
- echo ${Env:CODEBUILD_SOURCE_VERSION}
- $RELEASE_ID=$(cat ${Env:RELEASE_ID_FILENAME})
- mkdir C:\tmp
- mv aws-sdk-cpp C:\tmp
- mv build C:\tmp
- cd C:\tmp\build
- python ../aws-sdk-cpp/scripts/run_integration_tests.py --testDir ./bin/Debug
post_build:
commands:
- cd C:\tmp
- aws s3 cp ./build s3://${Env:S3_BUCKET_NAME}/log/${Env:CODEBUILD_BUILD_ID}/ --recursive --exclude "*" --include "aws*.log"
- $BUILD_JOB_NAME=${Env:CODEBUILD_BUILD_ID}.Substring(0, ${Env:CODEBUILD_BUILD_ID}.IndexOf(":"))
- $BUILD_URL="https://${Env:AWS_REGION}.console.aws.amazon.com/codesuite/codebuild/projects/$BUILD_JOB_NAME/build/${Env:CODEBUILD_BUILD_ID}"
      - |
        if (${Env:CODEBUILD_BUILD_SUCCEEDING} -eq 0) {
          python aws-sdk-cpp/CI/trebuchet-release-pipeline/UpdateStatus.py -s IntegrationTests -e "[BUILD FAILURE](${BUILD_URL}) (${Env:CODEBUILD_BUILD_ID})" -i $RELEASE_ID -m "Step 3 of 4. Integration Tests Failed. A technician has already been notified." -b ${Env:CODEBUILD_BUILD_SUCCEEDING};
        }

View File

@@ -0,0 +1,127 @@
# Whenever you make any change here, you should update it in Amazon S3.
# This function serves as glue between SNS and S3.
# 1- Receives SNS message when Trebuchet release starts
# 2- Extracts the message (which should be JSON)
# 3- Writes the JSON to a file on disk
# 4- Downloads models with the presigned URL
# 5- Writes release notes to a file
# 6- Writes release id to a file
# 7- Upload all these files as a zip file to S3
import os
import shutil
import re
import json
import zipfile
import traceback
import boto3
from botocore.vendored import requests
S3_BUCKET_NAME = os.environ['S3_BUCKET_NAME']
RELEASE_MESSAGE_FILENAME = os.environ['RELEASE_MESSAGE_FILENAME']
RELEASE_ID_FILENAME = os.environ['RELEASE_ID_FILENAME']
RELEASE_NOTES_FILENAME = os.environ['RELEASE_NOTES_FILENAME']
PIPELINE_SOURCE = os.environ['PIPELINE_SOURCE']
UPDATE_STATUS_LAMBDA_FUNCTION_NAME = os.environ['UPDATE_STATUS_LAMBDA_FUNCTION_NAME']
OUTPUT_PATH = os.path.join('/tmp', 'output')
MODELS_OUTPUT_PATH = os.path.join(OUTPUT_PATH, 'models')
s3Resource = boto3.resource('s3', region_name = os.environ['AWS_REGION'])
lambdaClient = boto3.client('lambda', region_name = os.environ['AWS_REGION'])
updateStatusMessage = {
'stageName': 'HandleTrebuchetReleaseNotification',
'internalMessage': '',
'internalOnly': False,
'messageToTrebuchet': {
'releaseId' : '',
'language' : 'CPP',
'releaseState' : '',
'statusMessage' : ''
}
}
def lambda_handler(event, context):
try:
releaseMessage = json.loads(event['Records'][0]['Sns']['Message'])
# For local testing:
# with open(RELEASE_MESSAGE_FILENAME, 'r') as releaseMessageFile:
# releaseMessage = json.loads(releaseMessageFile.read())
print('[SNS] Receiving message from Trebuchet:', end = ' ')
print(releaseMessage)
if os.path.isdir(OUTPUT_PATH):
shutil.rmtree(OUTPUT_PATH)
os.mkdir(OUTPUT_PATH)
os.mkdir(MODELS_OUTPUT_PATH)
with open(os.path.join(OUTPUT_PATH, RELEASE_MESSAGE_FILENAME), 'w') as releaseMessageFile:
releaseMessageFile.write(json.dumps(releaseMessage))
releaseMessageFile.close()
with open(os.path.join(OUTPUT_PATH, RELEASE_ID_FILENAME), 'w') as releaseIdFile:
releaseIdFile.write(releaseMessage['release']['id'])
with open(os.path.join(OUTPUT_PATH, RELEASE_NOTES_FILENAME), 'w') as releaseNotesFile:
releaseNotesFile.write('')
updateStatusMessage['messageToTrebuchet'] = {
'releaseId' : releaseMessage['release']['id'],
'language' : 'CPP',
'releaseState' : 'InProgress',
'statusMessage' : 'Step 0 of 4. Handling release notification from Trebuchet.'
}
updateStatus(updateStatusMessage)
for feature in releaseMessage['release']['features']:
print('Downloading c2j model files for ' + feature['serviceId'])
response = requests.get(feature['c2jModels'])
if response.status_code != 200:
raise Exception('Error downloading c2j model with feature: ' + feature['featureArn'])
with open(os.path.join('/tmp', 'models.tmp.zip'), 'wb') as c2jModelsZipFile:
c2jModelsZipFile.write(response.content)
archive = zipfile.ZipFile(os.path.join('/tmp', 'models.tmp.zip'), 'r')
archive.debug = 3
for info in archive.infolist():
print(' ' + info.filename)
if re.match(r'output/.*\.normal\.json', info.filename):
outputPath = os.path.join(MODELS_OUTPUT_PATH, os.path.basename(info.filename))
print('* copying {0} to {1}'.format(info.filename, outputPath))
fileHandle = archive.open(info.filename, 'r')
fileOutput = fileHandle.read()
with open(outputPath, 'wb') as destination:
destination.write(fileOutput)
fileHandle.close()
releaseNotes = feature['releaseNotes']
print('Append release notes for ' + feature['serviceId'])
with open(os.path.join(OUTPUT_PATH, RELEASE_NOTES_FILENAME), 'a') as releaseNotesFile:
releaseNotesFile.write(releaseNotes + '\n\n')
updateStatusMessage['messageToTrebuchet']['statusMessage'] = 'Step 0 of 4. Handled release notification from Trebuchet.'
updateStatus(updateStatusMessage)
print('Archiving release-message, release-id, release-notes, and models directory into a zip file.')
shutil.make_archive('/tmp/models', 'zip', OUTPUT_PATH)
print('[S3] Sending zip file including json file to S3://{0}/{1}.'.format(S3_BUCKET_NAME, PIPELINE_SOURCE))
response = s3Resource.meta.client.upload_file('/tmp/models.zip', S3_BUCKET_NAME, PIPELINE_SOURCE)
print('Response:', end = ' ')
print(response)
except Exception:
traceback.print_exc()
updateStatusMessage['internalMessage'] = traceback.format_exc()
updateStatusMessage['messageToTrebuchet']['releaseState'] = 'Blocked'
updateStatusMessage['messageToTrebuchet']['statusMessage'] = 'Step 0 of 4. Failed to handle release notification from Trebuchet.'
updateStatus(updateStatusMessage)
def updateStatus(releaseStatus):
print('[Lambda] Triggering Lambda function to update status.')
response = lambdaClient.invoke(
FunctionName = UPDATE_STATUS_LAMBDA_FUNCTION_NAME,
InvocationType = 'RequestResponse',
Payload = json.dumps(releaseStatus)
)
print('Response:', end = ' ')
print(response)
# lambda_handler('', '')

View File

@@ -0,0 +1,29 @@
import os
import json
from botocore.vendored import requests
CHIME_BOT_URL = os.environ['CHIME_BOT_URL']
TREBUCHET_RELEASE_PIPELINE_NAME = os.environ['TREBUCHET_RELEASE_PIPELINE_NAME']
SOURCE_STAGE_NAME = os.environ['SOURCE_STAGE_NAME']
PROD_STAGE_NAME = os.environ['PROD_STAGE_NAME']
def lambda_handler(event, context):
print('Received Event: ' + json.dumps(event))
message = json.loads(event['Records'][0]['Sns']['Message'])
pipeline = message['detail']['pipeline']
stage = message['detail']['stage']
state = message['detail']['state']
if (state == 'SUCCEEDED' and pipeline == TREBUCHET_RELEASE_PIPELINE_NAME and (stage == SOURCE_STAGE_NAME or stage == PROD_STAGE_NAME)) or (state == 'FAILED'):
headers = {'Content-Type': 'application/json'}
data = {}
data['Content'] = '/md {mentionAll}\nPipeline: {pipeline}\nStage: {stage}\nState: {state}'.format(
mentionAll = '@All' if state == 'FAILED' else '',
pipeline = pipeline,
stage = stage,
state = state)
print('[Chime] Sending message to Chime Bot: ' + json.dumps(data['Content']))
        response = requests.post(CHIME_BOT_URL, headers = headers, data = json.dumps(data))
        print('Response:', end=' ')
        print(response)
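
The condition above pings the Chime room on any pipeline failure, or on success of the source/prod stages of the release pipeline. A standalone version of that predicate (pipeline and stage names below are placeholders):

```python
def should_notify(pipeline, stage, state, release_pipeline, source_stage, prod_stage):
    """Decide whether a CodePipeline state-change event warrants a Chime message."""
    if state == 'FAILED':
        # Every failure is reported, regardless of pipeline or stage.
        return True
    return (state == 'SUCCEEDED'
            and pipeline == release_pipeline
            and stage in (source_stage, prod_stage))

print(should_notify('release-pipe', 'Source', 'SUCCEEDED',
                    'release-pipe', 'Source', 'Prod'))  # → True
```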

View File

@@ -0,0 +1,122 @@
# Whenever you make any change here, you should update it in Amazon S3.
# This Lambda function will make notifications to:
# 1. SQS queue to update status with Trebuchet
# 2. ChimeBot to notify engineers in the Chime room
# 3. CloudWatch metrics to trigger alarms and cut tickets
# Expected inputs of this Lambda function:
# {
# "stageName": "HandleTrebuchetReleaseNotification|RegenerateCode|Build|IntegrationTests|PublishToGithub",
# "internalMessage": "",
# "internalOnly": True|False
# "messageToTrebuchet": {
# "releaseId" : "",
# "language" : "CPP",
# "releaseState" : "InProgress|Success|Blocked|Failed",
# "statusMessage" : "",
# "additionalDetails" : {
# "generatedCodePresignedUrl":"",
# "logPresignedUrl":""
# }
# }
# }
import os
import json
import boto3
import traceback
from botocore.vendored import requests
CHIME_BOT_URL = os.environ['CHIME_BOT_URL']
TREBUCHET_QUEUE_URL = os.environ['TREBUCHET_QUEUE_URL']
sqsClient = boto3.client('sqs')
cloudwatchClient = boto3.client('cloudwatch')
def lambda_handler(event, context):
print('Received Event: ' + json.dumps(event))
if 'stageName' not in event or event['stageName'] == "":
event['stageName'] = 'Unknown'
if 'internalMessage' not in event:
event['internalMessage'] = ''
if 'internalOnly' not in event:
event['internalOnly'] = False
try:
failure = 0.0
sendMessageToChimeBot = False
mentionAll = False
if 'messageToTrebuchet' not in event or 'releaseId' not in event['messageToTrebuchet'] or event['messageToTrebuchet']['releaseId'] == "":
raise Exception('Missing releaseId in the received release message.')
messageToTrebuchet = event['messageToTrebuchet']
if messageToTrebuchet['releaseState'] == 'InProgress' or messageToTrebuchet['releaseState'] == 'Success':
pass
elif messageToTrebuchet['releaseState'] == 'Blocked' or messageToTrebuchet['releaseState'] == 'Failed':
failure = 1.0
sendMessageToChimeBot = True
mentionAll = True
else:
failure = 1.0
sendMessageToChimeBot = True
mentionAll = True
event['internalMessage'] = ('{originalInternalMessage} releaseState ({releaseState}) should be one of these: InProgress|Success|Blocked|Failed, this build will be marked as Blocked.'.format(
originalInternalMessage = event['internalMessage'],
releaseState = messageToTrebuchet['releaseState']
)).strip()
if not event['internalOnly']:
notifyTrebuchetSQS(messageToTrebuchet)
notifyCloudWatch(failure)
if sendMessageToChimeBot:
notifyChimeBot(event['stageName'], event['internalMessage'], mentionAll)
except Exception:
traceback.print_exc()
notifyChimeBot(
stageName = event['stageName'],
message = '\n'.join([event['internalMessage'], traceback.format_exc()]).strip(),
mentionAll = True)
if 'messageToTrebuchet' in event and 'releaseId' in event['messageToTrebuchet'] and not event['messageToTrebuchet']['releaseId'] == "":
notifyTrebuchetSQS({
"releaseId" : event['messageToTrebuchet']['releaseId'],
"language" : "CPP",
"releaseState" : "Blocked",
"statusMessage" : "Encountered internal errors."
})
def notifyChimeBot(stageName, message, mentionAll = False):
headers = {'Content-Type': 'application/json'}
data = {}
data['Content'] = '/md {mentionAll}\nStage: {stageName}\nMessage: {message}'.format(
mentionAll = '@All' if mentionAll else '',
stageName = stageName,
message = message)
print('[Chime] Sending message to Chime Bot: ' + json.dumps(data['Content']))
    response = requests.post(CHIME_BOT_URL, headers = headers, data = json.dumps(data))
    print('Response:', end=' ')
    print(response)
def notifyCloudWatch(value):
    print('[CloudWatch] Putting data to Metric: BuildFailure with value: ' + str(value))
response = cloudwatchClient.put_metric_data(
Namespace='BuildPipeline',
MetricData=[{
'MetricName' : "BuildFailure",
'Value' : value,
'Unit' : 'Count',
'StorageResolution' : 60
}]
)
print('Response:', end=' ')
print(response)
def notifyTrebuchetSQS(message):
print('[SQS] Sending message to Trebuchet queue:', end=' ')
print(message)
response = sqsClient.send_message(
QueueUrl = TREBUCHET_QUEUE_URL,
MessageBody = json.dumps(message),
MessageGroupId = 'CppSdkRelease'
)
print('Response:', end=' ')
print(response)
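
The handler above backfills missing fields (`stageName`, `internalMessage`, `internalOnly`) before acting on the event. A minimal sketch of that defaulting step as a pure function, which also avoids mutating the caller's dict:

```python
def normalize_event(event):
    """Apply the same defaults the Lambda handler applies to its input."""
    event = dict(event)  # copy so the caller's event is untouched
    if not event.get('stageName'):
        event['stageName'] = 'Unknown'
    event.setdefault('internalMessage', '')
    event.setdefault('internalOnly', False)
    return event

print(normalize_event({})['stageName'])  # → Unknown
```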

File diff suppressed because it is too large

View File

@@ -0,0 +1,329 @@
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0.
#
cmake_minimum_required (VERSION 3.1)
if(POLICY CMP0028)
cmake_policy(SET CMP0028 NEW)
endif()
if(POLICY CMP0048)
cmake_policy(SET CMP0048 NEW)
endif()
if(POLICY CMP0054)
cmake_policy(SET CMP0054 NEW)
endif()
if(POLICY CMP0056)
cmake_policy(SET CMP0056 NEW)
endif()
if(POLICY CMP0057)
cmake_policy(SET CMP0057 NEW) # support IN_LIST
endif()
# 3.0 or higher is strongly suggested; build settings (target_compile_options/etc...) sometimes do not get propagated properly under certain conditions prior to this version
# Making this a hard requirement is potentially disruptive to existing customers who aren't affected by the bad behavior though, so just warn for now
if(CMAKE_MAJOR_VERSION LESS 3)
message(WARNING "Building with CMake 3.0 or higher is strongly suggested; current version is ${CMAKE_MAJOR_VERSION}.${CMAKE_MINOR_VERSION}.${CMAKE_PATCH_VERSION}")
endif()
get_filename_component(AWS_NATIVE_SDK_ROOT "${CMAKE_CURRENT_SOURCE_DIR}" ABSOLUTE)
# git is required for Android builds and building third-party dependencies
find_package(Git)
# Cmake invocation variables:
# CUSTOM_MEMORY_MANAGEMENT - if set to ON, generates the sdk project files with custom memory management enabled, otherwise disables it
# BUILD_ONLY - only build project identified by this variable, a semi-colon delimited list, if this is set we will build only the projects listed. Core will always be built as will its unit tests.
# Also if a high level client is specified then we will build its dependencies as well. If a project has tests, the tests will be built.
# REGENERATE_CLIENTS - all clients being built on this run will be regenerated from the api definitions, this option involves some setup of python, java 8, jdk 1.8, and maven
# ADD_CUSTOM_CLIENTS - semi-colon delimited list of format serviceName=<yourserviceName>,version=<theVersionNumber>;serviceName2=<yourOtherServiceName>,version=<versionNumber2>
# to use these arguments, you should add the api definition .normal.json file for your service to the api-description folder in the generator.
# NDK_DIR - directory where the android NDK is installed; if not set, the location will be read from the ANDROID_NDK environment variable
# CUSTOM_PLATFORM_DIR - directory where custom platform scripts, modules, and source resides
# AWS_SDK_ADDITIONAL_LIBRARIES - names of additional libraries to link into aws-cpp-sdk-core in order to support unusual/unanticipated linking setups (static curl against static-something-other-than-openssl for example)
# TODO: convert boolean invocation variables to options
option(ENABLE_UNITY_BUILD "If enabled, the SDK will be built using a single unified .cpp file for each service library. Reduces the size of static library binaries on Windows and Linux" ON)
option(MINIMIZE_SIZE "If enabled, the SDK will be built via a unity aggregation process that results in smaller static libraries; additionally, release binaries will favor size optimizations over speed" OFF)
option(BUILD_SHARED_LIBS "If enabled, all aws sdk libraries will be built as shared objects; otherwise all Aws libraries will be built as static objects" ON)
option(FORCE_SHARED_CRT "If enabled, will unconditionally link the standard libraries in dynamically, otherwise the standard library will be linked in based on the BUILD_SHARED_LIBS setting" ON)
option(SIMPLE_INSTALL "If enabled, removes all the additional indirection (platform/cpu/config) in the bin and lib directories on the install step" ON)
option(NO_HTTP_CLIENT "If enabled, no platform-default http client will be included in the library. For the library to be used you will need to provide your own platform-specific implementation" OFF)
option(NO_ENCRYPTION "If enabled, no platform-default encryption will be included in the library. For the library to be used you will need to provide your own platform-specific implementations" OFF)
option(USE_IXML_HTTP_REQUEST_2 "If enabled on windows, the com object IXmlHttpRequest2 will be used for the http stack" OFF)
option(ENABLE_RTTI "Flag to enable/disable rtti within the library" ON)
option(ENABLE_TESTING "Flag to enable/disable building unit and integration tests" ON)
option(AUTORUN_UNIT_TESTS "Flag to enable/disable automatically run unit tests after building" ON)
option(ANDROID_BUILD_CURL "When building for Android, should curl be built as well" ON)
option(ANDROID_BUILD_OPENSSL "When building for Android, should Openssl be built as well" ON)
option(ANDROID_BUILD_ZLIB "When building for Android, should Zlib be built as well" ON)
option(FORCE_CURL "Forces usage of the Curl client rather than the default OS-specific api" OFF)
option(ENABLE_ADDRESS_SANITIZER "Flags to enable/disable Address Sanitizer for gcc or clang" OFF)
option(BYPASS_DEFAULT_PROXY "Bypass the machine's default proxy settings when using IXmlHttpRequest2" ON)
option(BUILD_DEPS "Build third-party dependencies" ON)
option(ENABLE_CURL_LOGGING "If enabled, Curl's internal log will be piped to SDK's logger" ON)
option(ENABLE_HTTP_CLIENT_TESTING "If enabled, corresponding http client test suites will be built and run" OFF)
option(ENABLE_VIRTUAL_OPERATIONS "This option usually works with REGENERATE_CLIENTS. \
If enabled when doing code generation, operation related functions in service clients will be marked as virtual. \
If disabled when doing code generation, virtual will not be added to operation functions and service client class will be marked as final. \
If disabled, SDK will add compiler flags '-ffunction-sections -fdata-sections' for gcc and clang when compiling. \
You can utilize this feature to work with your linker to reduce binary size of your application on Unix platforms when doing static linking in Release mode." ON)
set(BUILD_ONLY "" CACHE STRING "A semi-colon delimited list of the projects to build")
set(CPP_STANDARD "11" CACHE STRING "Flag to upgrade the C++ standard used. The default is 11. The minimum is 11.")
if(NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE Release)
endif()
#From https://stackoverflow.com/questions/18968979/how-to-get-colorized-output-with-cmake
if(NOT WIN32)
string(ASCII 27 Esc)
set(ColourReset "${Esc}[m")
set(ColourBold "${Esc}[1m")
set(Red "${Esc}[31m")
set(Green "${Esc}[32m")
set(Yellow "${Esc}[33m")
set(Blue "${Esc}[34m")
set(Magenta "${Esc}[35m")
set(Cyan "${Esc}[36m")
set(White "${Esc}[37m")
set(BoldRed "${Esc}[1;31m")
set(BoldGreen "${Esc}[1;32m")
set(BoldYellow "${Esc}[1;33m")
set(BoldBlue "${Esc}[1;34m")
set(BoldMagenta "${Esc}[1;35m")
set(BoldCyan "${Esc}[1;36m")
set(BoldWhite "${Esc}[1;37m")
endif()
# backwards compatibility with old command line params
if("${STATIC_LINKING}" STREQUAL "1")
set(BUILD_SHARED_LIBS OFF)
endif()
if(MINIMIZE_SIZE)
message(STATUS "MINIMIZE_SIZE enabled")
set(ENABLE_UNITY_BUILD ON) # MINIMIZE_SIZE always implies UNITY_BUILD
endif()
set(PYTHON_CMD "python")
# CMAKE_MODULE_PATH is a CMAKE variable. It contains a list of paths
# which could be used to search CMAKE modules by "include()" or "find_package()", but the default value is empty.
# Add cmake dir to search list
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}/cmake")
# include() will "load and run" cmake script
include(resolve_platform)
include(CMakePackageConfigHelpers)
if (REGENERATE_CLIENTS AND NOT ENABLE_VIRTUAL_OPERATIONS)
if (PLATFORM_LINUX OR PLATFORM_APPLE)
Message(STATUS "${BoldYellow}You are regenerating the service clients' source code while turning ENABLE_VIRTUAL_OPERATIONS off. If you are targeting smaller binary size, read the description string of ENABLE_VIRTUAL_OPERATIONS.${ColourReset}")
endif()
endif()
# use response files to prevent command-line-too-big errors for large libraries like iam
set(CMAKE_CXX_USE_RESPONSE_FILE_FOR_OBJECTS 1)
set(CMAKE_CXX_USE_RESPONSE_FILE_FOR_INCLUDES 1)
set(CMAKE_CXX_RESPONSE_FILE_LINK_FLAG "@")
if(COMMAND apply_pre_project_platform_settings)
apply_pre_project_platform_settings()
endif()
include(initialize_project_version)
if (BUILD_SHARED_LIBS OR FORCE_SHARED_CRT)
set(STATIC_CRT OFF)
else()
set(STATIC_CRT ON)
endif()
# Add linker search paths to RPATH to fix the problem where some linkers can't find cross-compiled dependent libraries in customer paths when linking executables.
set(CMAKE_INSTALL_RPATH_USE_LINK_PATH true)
# build third-party targets
if (BUILD_DEPS)
# If building third party dependencies, we will move them to the same directory where SDK has been installed during installation.
# Therefore, we should set rpath to $ORIGIN to let SDK find these third party dependencies.
# Otherwise, customers are responsible for handling the linkage to these libraries.
set(CMAKE_INSTALL_RPATH "$ORIGIN")
set(AWS_DEPS_BUILD_DIR ${CMAKE_CURRENT_BINARY_DIR}/.deps)
if (NOT DEFINED AWS_DEPS_INSTALL_DIR)
if (DEFINED CMAKE_INSTALL_PREFIX)
set(AWS_DEPS_INSTALL_DIR ${CMAKE_INSTALL_PREFIX} CACHE STRING "Path where third-party dependencies will be or have been installed")
else()
set(AWS_DEPS_INSTALL_DIR ${CMAKE_CURRENT_BINARY_DIR}/.deps/install CACHE STRING "Path where third-party dependencies will be or have been installed")
endif()
endif()
if (NOT CMAKE_GENERATOR_PLATFORM STREQUAL "")
set(GEN_PLATFORM_ARG "-A${CMAKE_GENERATOR_PLATFORM}")
endif()
file(MAKE_DIRECTORY ${AWS_DEPS_BUILD_DIR})
if(TARGET_ARCH STREQUAL "ANDROID")
execute_process(
COMMAND ${CMAKE_COMMAND} -G ${CMAKE_GENERATOR}
-DTARGET_ARCH=${TARGET_ARCH}
-DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}
-DCMAKE_ANDROID_NDK_TOOLCHAIN_VERSION=${CMAKE_ANDROID_NDK_TOOLCHAIN_VERSION}
-DANDROID_NATIVE_API_LEVEL=${ANDROID_NATIVE_API_LEVEL}
-DANDROID_ABI=${ANDROID_ABI}
-DANDROID_TOOLCHAIN=${ANDROID_TOOLCHAIN}
-DANDROID_STL=${ANDROID_STL}
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
-DBUILD_SHARED_LIBS=${BUILD_SHARED_LIBS}
-DCMAKE_INSTALL_PREFIX=${AWS_DEPS_INSTALL_DIR}
-DGIT_EXECUTABLE=${GIT_EXECUTABLE}
${GEN_PLATFORM_ARG}
${CMAKE_CURRENT_SOURCE_DIR}/third-party
WORKING_DIRECTORY ${AWS_DEPS_BUILD_DIR}
RESULT_VARIABLE BUILD_3P_EXIT_CODE)
elseif(TARGET_ARCH STREQUAL "APPLE" AND DEFINED CMAKE_OSX_ARCHITECTURES AND NOT CMAKE_OSX_ARCHITECTURES STREQUAL "")
message("Cross compiling third-party dependencies for architecture ${CMAKE_OSX_ARCHITECTURES}")
execute_process(
COMMAND ${CMAKE_COMMAND} -G ${CMAKE_GENERATOR}
-DTARGET_ARCH=${TARGET_ARCH}
-DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
-DBUILD_SHARED_LIBS=${BUILD_SHARED_LIBS}
-DCMAKE_INSTALL_PREFIX=${AWS_DEPS_INSTALL_DIR}
-DCMAKE_OSX_SYSROOT=${CMAKE_OSX_SYSROOT}
-DCMAKE_OSX_ARCHITECTURES=${CMAKE_OSX_ARCHITECTURES}
-DCMAKE_SYSTEM_NAME=${CMAKE_SYSTEM_NAME}
-DCMAKE_C_FLAGS=${CMAKE_C_FLAGS}
-DCMAKE_RUNTIME_OUTPUT_DIRECTORY=${CMAKE_CURRENT_BINARY_DIR}/bin
${GEN_PLATFORM_ARG}
${CMAKE_CURRENT_SOURCE_DIR}/third-party
WORKING_DIRECTORY ${AWS_DEPS_BUILD_DIR}
RESULT_VARIABLE BUILD_3P_EXIT_CODE)
else()
execute_process(
COMMAND ${CMAKE_COMMAND} -G ${CMAKE_GENERATOR}
-DTARGET_ARCH=${TARGET_ARCH}
-DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
-DBUILD_SHARED_LIBS=${BUILD_SHARED_LIBS}
-DSTATIC_CRT=${STATIC_CRT}
-DCMAKE_INSTALL_PREFIX=${AWS_DEPS_INSTALL_DIR}
-DCMAKE_RUNTIME_OUTPUT_DIRECTORY=${CMAKE_CURRENT_BINARY_DIR}/bin
${GEN_PLATFORM_ARG}
${CMAKE_CURRENT_SOURCE_DIR}/third-party
WORKING_DIRECTORY ${AWS_DEPS_BUILD_DIR}
RESULT_VARIABLE BUILD_3P_EXIT_CODE)
endif()
if (NOT ${BUILD_3P_EXIT_CODE} EQUAL 0)
message(FATAL_ERROR "Failed to configure third-party libraries.")
endif()
execute_process(COMMAND ${CMAKE_COMMAND} --build ${AWS_DEPS_BUILD_DIR} --config ${CMAKE_BUILD_TYPE}
RESULT_VARIABLE BUILD_3P_EXIT_CODE)
if (NOT ${BUILD_3P_EXIT_CODE} EQUAL 0)
message(FATAL_ERROR "Failed to build third-party libraries.")
endif()
message(STATUS "Third-party dependencies are installed at: ${AWS_DEPS_INSTALL_DIR}")
list(APPEND CMAKE_PREFIX_PATH "${AWS_DEPS_INSTALL_DIR}")
endif()
set(THIRD_PARTY_LIBS "aws-c-event-stream;aws-checksums;aws-c-common")
# build the sdk targets
project("aws-cpp-sdk-all" VERSION "${PROJECT_VERSION}" LANGUAGES CXX)
# http client, encryption, zlib
include(external_dependencies)
if(COMMAND apply_post_project_platform_settings)
apply_post_project_platform_settings()
endif()
set(CMAKE_CONFIGURATION_TYPES
Debug # Setup for easy debugging. No optimizations.
DebugOpt # An optimized version of Debug.
Release # Fully optimized, no debugging information.
RelWithDebInfo # A debuggable version of Release.
MinSizeRel # Like Release, but optimized for memory rather than speed.
)
include(compiler_settings)
# Instead of calling functions/macros inside included cmake scripts, we should call them in our main CMakeList.txt
set_msvc_flags()
set_msvc_warnings()
include(sdks)
include(utilities)
include(build_external)
if(ENABLE_BCRYPT_ENCRYPTION)
set(CRYPTO_LIBS Bcrypt)
set(CRYPTO_LIBS_ABSTRACT_NAME Bcrypt)
elseif(ENABLE_OPENSSL_ENCRYPTION)
set(CRYPTO_LIBS ${OPENSSL_LIBRARIES} ${ZLIB_LIBRARIES})
set(CRYPTO_LIBS_ABSTRACT_NAME crypto ssl z)
endif()
if(ENABLE_CURL_CLIENT)
set(CLIENT_LIBS ${CURL_LIBRARIES})
set(CLIENT_LIBS_ABSTRACT_NAME curl)
elseif(ENABLE_WINDOWS_CLIENT)
if(USE_IXML_HTTP_REQUEST_2)
set(CLIENT_LIBS msxml6 runtimeobject)
set(CLIENT_LIBS_ABSTRACT_NAME msxml6 runtimeobject)
if(BYPASS_DEFAULT_PROXY)
list(APPEND CLIENT_LIBS winhttp)
list(APPEND CLIENT_LIBS_ABSTRACT_NAME winhttp)
endif()
else()
set(CLIENT_LIBS Wininet winhttp)
set(CLIENT_LIBS_ABSTRACT_NAME Wininet winhttp)
endif()
endif()
# set up the user-specified installation directories, if any, regardless of previous platform default settings
if (CMAKE_INSTALL_BINDIR)
set(BINARY_DIRECTORY "${CMAKE_INSTALL_BINDIR}")
endif()
if (CMAKE_INSTALL_LIBDIR)
set(LIBRARY_DIRECTORY "${CMAKE_INSTALL_LIBDIR}")
endif()
if (CMAKE_INSTALL_INCLUDEDIR)
set(INCLUDE_DIRECTORY "${CMAKE_INSTALL_INCLUDEDIR}")
endif()
if(BUILD_SHARED_LIBS)
set(ARCHIVE_DIRECTORY "${BINARY_DIRECTORY}")
else()
set(ARCHIVE_DIRECTORY "${LIBRARY_DIRECTORY}")
endif()
if (ENABLE_ADDRESS_SANITIZER)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsanitize=address -g -fno-omit-frame-pointer")
endif()
include(CheckCXXSymbolExists)
check_cxx_symbol_exists("pathconf" "unistd.h" HAS_PATHCONF)
if (HAS_PATHCONF)
add_definitions(-DHAS_PATHCONF)
endif()
check_cxx_symbol_exists("umask" "sys/stat.h" HAS_UMASK)
if (HAS_UMASK)
add_definitions(-DHAS_UMASK)
endif()
add_sdks()
# for user friendly cmake usage
include(setup_cmake_find_module)
# for generating make uninstall target
if (NOT TARGET uninstall)
ADD_CUSTOM_TARGET(uninstall "${CMAKE_COMMAND}" -P "${AWS_NATIVE_SDK_ROOT}/cmake/make_uninstall.cmake")
else()
ADD_CUSTOM_TARGET(uninstall-awssdk "${CMAKE_COMMAND}" -P "${AWS_NATIVE_SDK_ROOT}/cmake/make_uninstall.cmake")
endif()

## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
opensource-codeofconduct@amazon.com with any additional questions or comments.

## Contributing Back
**Please Do!**
__Jump To:__
* [Bug Reports](#bug-reports)
* [Feature Requests](#feature-requests)
* [Code Contributions](#code-contributions)
* [Contribution Guidelines](#Contribution-Guidelines)
## Bug Reports
Bug reports are accepted through the [Issues][issues] page.
### Before Submitting a Bug Report
Before submitting a bug report, please do the following:
1. Do a search through the existing issues to make sure it has not already been reported. If there's an existing one, be sure to give a +1 reaction, which will help us prioritize which issues to address first.
2. If possible, upgrade to the latest release of the SDK. The SDK has a near daily release cadence so it's possible the bug has already been fixed in the latest version.
If, after doing the above steps, you determine that you need to submit a bug report, refer to the next section.
### Submitting a Bug Report
So that we are able to assist you as effectively as possible with the issue, please ensure that your bug report has the following:
* A short, descriptive title. Ideally, other community members should be able to get a good idea of the issue just from reading the title.
* A succinct, detailed description of the problem you're experiencing. This should include:
* Expected behavior of the SDK and the actual behavior exhibited.
* Any details of your application environment that may be relevant. At minimum, this should include the __SDK version__ and __Operating System__ you're using.
* If applicable, the exception stacktrace.
* If you are able to create one, include a [Minimal Working Example][mwe] that reproduces the issue.
* [Markdown][markdown] formatting as appropriate to make the report easier to read; for example use code blocks when pasting a code snippet and exception stacktraces.
## Feature Requests
Like bug reports, feature requests are submitted through the [Issues][issues] page.
As with Bug Reports, please do a search of the open requests first before submitting a new one to avoid duplicates. If you find an existing one, give it a +1.
__NOTE:__ If this is a feature you intend to implement, please submit the feature request *before* working on any code changes. This will allow members on the SDK team to have a discussion with you to ensure that it's the right design and that it makes sense to include in the SDK. Keep in mind that other concerns like source and binary compatibility will also play a deciding factor.
### Submitting a Feature Request
Open an [issue][issues] with the following:
* A short, descriptive title. Ideally, other community members should be able to get a good idea of the feature just from reading the title.
* A detailed description of the proposed feature. Include justification for why it should be added to the SDK, and possibly example code to illustrate how it should work.
* [Markdown][markdown] formatting as appropriate to make the request easier to read.
* If you intend to implement this feature, indicate that you'd like the issue to be assigned to you
## Code Contributions
Code contributions to the SDK are done through [Pull Requests][pull-requests]. Please keep the following in mind when considering a code contribution:
* The SDK is released under the [Apache 2.0 License][license].
Any code you submit will be released under this license. If you are contributing a large/substantial feature, you may be asked to sign a Contributor License Agreement (CLA).
* For anything but very small or quick changes, you should always start by checking the [Issues][issues] page to see if the work is already being done by another person.
If you're working on a bug fix, check to see if the bug has already been reported. If it has but no one is assigned to it, ask one of the maintainers to assign it to you before beginning work. If you're confident the bug hasn't been reported yet, create a new [Bug Report](#bug-reports) then ask to be assigned to it.
If you are thinking about adding entirely new functionality, open a [Feature Request](#feature-requests) first before beginning work; again this is to make sure that no one else is already working on it, and also that it makes sense to be included in the SDK.
* All code contributions must be accompanied with new or modified tests that verify that the code works as expected; i.e. that the issue has been fixed or that the functionality works as intended.
## Your First Code Change
Before submitting your pull request, refer to the pull request readiness
checklist below:
* [ ] Includes tests to exercise the new behavior
* [ ] Code is documented, especially public and user-facing constructs
* [ ] Git commit message is detailed and includes context behind the change
* [ ] If the change is related to an existing Bug Report or Feature Request, the issue number is referenced
__Note__: Some changes have additional requirements. Refer to the section below
to see if your change will require additional work to be accepted.
All Pull Requests must be approved by at least one member of the SDK team before they can be merged in. The members only have limited bandwidth to review Pull Requests, so it's not unusual for a Pull Request to go unreviewed for a few days, especially if it's a large or complex one. If, after a week, your Pull Request has not had any engagement from the SDK team, feel free to comment and tag a member to ask for a review.
If your branch has more than one commit when it's approved, you will also be asked to squash them into a single commit before it is merged in.
## Contribution Guidelines
* Don't make changes to generated clients directly; make your changes in the generator. Changes to Core, Scripts, and High-Level interfaces are fine directly in the code.
* Do not use non-trivial statics anywhere. This will cause custom memory managers to crash in random places.
* Use 4 spaces for indents and never use tabs.
* No exceptions.... no exceptions. Use the Outcome pattern for returning data if you need to also return an optional error code.
* Always think about platform independence. If this is impossible, put a nice abstraction on top of it and use an abstract factory.
* Use RAII, Aws::New and Aws::Delete should only appear in constructors and destructors.
* Be sure to follow the rule of 5.
* Use the C++ 11 standard where possible.
* Use UpperCamelCase for custom type names and function names. Use m_* for member variables. Don't use statics. If you must, use UpperCamelCase for static variables.
* Always be const correct, and be mindful of when you need to support r-values. We don't trust compilers to optimize this uniformly across builds, so please be explicit.
* Namespace names should be UpperCamelCase. Never put a using namespace statement in a header file unless it is scoped by a class. It is fine to use a using namespace statement in a cpp file.
* Use enum class, not enum
* Prefer `#pragma once` for include guards.
* Forward declare whenever possible.
* Use nullptr instead of NULL.
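The Outcome pattern mentioned above can be sketched as follows. This is a simplified, std-only illustration of the idea, not the SDK's actual `Aws::Utils::Outcome` template, and `ParsePositive` is a hypothetical operation invented for the example:

```cpp
#include <cassert>
#include <string>
#include <utility>

// Simplified stand-in for the SDK's Outcome<R, E>: it carries either a
// result or an error, never throws, and the caller checks IsSuccess()
// before reading either side.
template <typename R, typename E>
class Outcome
{
public:
    explicit Outcome(R result) : m_success(true), m_result(std::move(result)) {}
    explicit Outcome(E error) : m_success(false), m_error(std::move(error)) {}

    bool IsSuccess() const { return m_success; }
    const R& GetResult() const { return m_result; }
    const E& GetError() const { return m_error; }

private:
    bool m_success;
    R m_result{};
    E m_error{};
};

// Hypothetical operation that reports failure through the Outcome
// instead of throwing an exception.
Outcome<int, std::string> ParsePositive(int value)
{
    if (value > 0)
        return Outcome<int, std::string>(value);
    return Outcome<int, std::string>(std::string("value must be positive"));
}
```

Callers branch on `IsSuccess()` exactly as the service clients' generated code does, which keeps error handling explicit without exceptions.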
[license]: ./LICENSE.txt
[mwe]: https://en.wikipedia.org/wiki/Minimal_Working_Example
[markdown]: https://guides.github.com/features/mastering-markdown/
[issues]: https://github.com/aws/aws-sdk-cpp/issues
[pull-requests]: https://github.com/aws/aws-sdk-cpp/pulls

# Advanced Topics and Tips
__This section includes the following topics:__
* [Uninstalling (auto build only)](#Uninstalling)
* [Overriding Your HTTP Client](#Overriding-your-Http-Client)
* [Error Handling](#Error-Handling)
* [Provided Utilities](#provided-utilities)
* [Controlling IOStreams used by the HttpClient and the AWSClient](#Controlling-IOStreams-used-by-the-HttpClient-and-the-AWSClient)
### Uninstalling:
To uninstall these libraries:
```sh
sudo make uninstall
```
You may define a custom uninstall target when you are using the SDK as a sub-project; make sure it comes before the default definition in `CMakeLists.txt`. You can then uninstall the SDK-related libraries with:
```sh
sudo make uninstall-awssdk
```
### Overriding your Http Client
The default HTTP client for Windows is WinHTTP. The default HTTP client for all other platforms is Curl. If needed, you can create a custom HttpClientFactory and add it to the SDKOptions object that you pass to Aws::InitAPI().
### Error Handling
We do not use exceptions; however, you can use exceptions in your code. Every service client returns an outcome object that includes the result and an error code.
Example of handling error conditions:
```cpp
bool CreateTableAndWaitForItToBeActive()
{
CreateTableRequest createTableRequest;
AttributeDefinition hashKey;
hashKey.SetAttributeName(HASH_KEY_NAME);
hashKey.SetAttributeType(ScalarAttributeType::S);
createTableRequest.AddAttributeDefinitions(hashKey);
KeySchemaElement hashKeySchemaElement;
hashKeySchemaElement.WithAttributeName(HASH_KEY_NAME).WithKeyType(KeyType::HASH);
createTableRequest.AddKeySchema(hashKeySchemaElement);
ProvisionedThroughput provisionedThroughput;
provisionedThroughput.SetReadCapacityUnits(readCap);
provisionedThroughput.SetWriteCapacityUnits(writeCap);
createTableRequest.WithProvisionedThroughput(provisionedThroughput);
createTableRequest.WithTableName(tableName);
CreateTableOutcome createTableOutcome = dynamoDbClient->CreateTable(createTableRequest);
if (createTableOutcome.IsSuccess())
{
DescribeTableRequest describeTableRequest;
describeTableRequest.SetTableName(tableName);
bool shouldContinue = true;
while (shouldContinue)
{
// Re-poll the table status on every iteration; describing the table only
// once before the loop would spin forever on a stale result.
DescribeTableOutcome outcome = dynamoDbClient->DescribeTable(describeTableRequest);
if (outcome.IsSuccess() && outcome.GetResult().GetTable().GetTableStatus() == TableStatus::ACTIVE)
{
break;
}
else
{
std::this_thread::sleep_for(std::chrono::seconds(1));
}
}
return true;
}
else if(createTableOutcome.GetError().GetErrorType() == DynamoDBErrors::RESOURCE_IN_USE)
{
return true;
}
return false;
}
```
### Provided Utilities
The provided utilities include HTTP stack, string utils, hashing utils, JSON parser, and XML parser.
##### HTTP Stack
/aws/core/http/
The HTTP client provides connection pooling, is thread safe, and can be reused for your purposes. See the Client Configuration section above.
##### String Utils
/aws/core/utils/StringUtils.h
This header file provides core string functions, such as trim, lowercase, and numeric conversions.
##### Hashing Utils
/aws/core/utils/HashingUtils.h
This header file provides hashing functions, such as SHA256, MD5, Base64, and SHA256_HMAC.
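As a rough illustration of what the Base64 helper does, a minimal std-only encoder is sketched below. This is not the SDK's implementation; in real code you would call the corresponding function in `HashingUtils`:

```cpp
#include <cassert>
#include <string>

// Minimal Base64 encoder, for illustration only: every 3 input bytes are
// split into four 6-bit indices into the Base64 alphabet, and the final
// partial group is padded with '='.
std::string Base64Encode(const std::string& in)
{
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    std::size_t i = 0;
    while (i + 2 < in.size())  // full 3-byte groups
    {
        unsigned v = (unsigned char)in[i] << 16 |
                     (unsigned char)in[i + 1] << 8 |
                     (unsigned char)in[i + 2];
        out += tbl[(v >> 18) & 63];
        out += tbl[(v >> 12) & 63];
        out += tbl[(v >> 6) & 63];
        out += tbl[v & 63];
        i += 3;
    }
    std::size_t rem = in.size() - i;
    if (rem == 1)       // 1 trailing byte -> 2 symbols + "=="
    {
        unsigned v = (unsigned char)in[i] << 16;
        out += tbl[(v >> 18) & 63];
        out += tbl[(v >> 12) & 63];
        out += "==";
    }
    else if (rem == 2)  // 2 trailing bytes -> 3 symbols + "="
    {
        unsigned v = (unsigned char)in[i] << 16 |
                     (unsigned char)in[i + 1] << 8;
        out += tbl[(v >> 18) & 63];
        out += tbl[(v >> 12) & 63];
        out += tbl[(v >> 6) & 63];
        out += '=';
    }
    return out;
}
```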
##### Cryptography
/aws/core/utils/crypto/Cipher.h
/aws/core/utils/crypto/Factories.h
This header file provides access to secure random number generators, AES symmetric ciphers in CBC, CTR, and GCM modes, and the underlying Hash implementations that are used in HashingUtils.
##### JSON Parser
/aws/core/utils/json/JsonSerializer.h
This header file provides a fully functioning yet lightweight JSON parser (thin wrapper around JsonCpp).
##### XML Parser
/aws/core/utils/xml/XmlSerializer.h
This header file provides a lightweight XML parser (thin wrapper around tinyxml2). RAII pattern has been added to the interface.
### Controlling IOStreams used by the HttpClient and the AWSClient
By default all responses use an input stream backed by a stringbuf. If needed, you can override the default behavior. For example, if you are using Amazon S3 GetObject and do not want to load the entire file into memory, you can use IOStreamFactory in AmazonWebServiceRequest to pass a lambda to create a file stream.
Example file stream request:
```cpp
GetObjectRequest getObjectRequest;
getObjectRequest.SetBucket(fullBucketName);
getObjectRequest.SetKey(keyName);
getObjectRequest.SetResponseStreamFactory([](){ return Aws::New<Aws::FStream>( ALLOCATION_TAG, DOWNLOADED_FILENAME, std::ios_base::out ); });
auto getObjectOutcome = s3Client->GetObject(getObjectRequest);
```

# CMake Parameters
## General CMake Variables/Options
CMake options are variables that can either be ON or OFF, with a controllable default. You can set an option either with CMake Gui tools or the command line via -D.
### BUILD_ONLY
Allows you to only build the clients you want to use. This will resolve low level client dependencies if you set this to a high-level sdk such as aws-cpp-sdk-transfer. This will also build integration and unit tests related to the projects you select if they exist. aws-cpp-sdk-core always builds regardless of the value of this argument. This is a list argument.
Example:
```sh
-DBUILD_ONLY="s3;dynamodb;cognito-identity"
```
### ADD_CUSTOM_CLIENTS
Allows you to build any arbitrary clients based on the api definition. Simply place your definition in the code-generation/api-definitions folder. Then pass this arg to cmake. The cmake configure step will generate your client and include it as a subdirectory in your build. This is particularly useful if you want to generate a C++ client for using one of your API Gateway services. To use this feature you need to have Python 2.7, a Java JDK 1.8, and Maven installed and in your executable path. Example:
```sh
-DADD_CUSTOM_CLIENTS="serviceName=myCustomService, version=2015-12-21;serviceName=someOtherService, version=2015-08-15"
```
### REGENERATE_CLIENTS
This argument will wipe out all generated code and regenerate the client directories from the code-generation/api-definitions folder. To use this argument, you need to have Python 2.7, a Java JDK 1.8, and Maven installed and in your executable path. Example:
```sh
-DREGENERATE_CLIENTS=1
```
### CUSTOM_MEMORY_MANAGEMENT
To use a custom memory manager, set the value to ON. You can install a custom allocator, and all STL types will use the custom allocation interface. If the value is set to OFF, you still might want to use the STL template types to help with DLL safety on Windows.
If static linking is enabled, custom memory management defaults to off. If dynamic linking is enabled, custom memory management defaults to on and avoids cross-DLL allocation and deallocation.
Note: To prevent linker mismatch errors, you must use the same value (ON or OFF) throughout your build system.
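As a rough std-only sketch of the idea (the SDK's real interface in `aws/core/utils/memory` has a richer signature, including alignment and an allocation tag), a custom allocation interface and manager might look like:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Simplified stand-in for the SDK's memory-system interface; install one
// implementation process-wide and all SDK allocations route through it.
struct MemorySystemInterface
{
    virtual ~MemorySystemInterface() = default;
    virtual void* AllocateMemory(std::size_t size) = 0;
    virtual void FreeMemory(void* ptr) = 0;
};

// Example manager that counts live allocations, e.g. for leak detection.
struct CountingMemoryManager : MemorySystemInterface
{
    int liveAllocations = 0;

    void* AllocateMemory(std::size_t size) override
    {
        ++liveAllocations;
        return std::malloc(size);
    }

    void FreeMemory(void* ptr) override
    {
        --liveAllocations;
        std::free(ptr);
    }
};
```

Because every allocation and deallocation goes through the same manager, memory never crosses a DLL boundary with mismatched allocators, which is why the ON/OFF value must be consistent throughout your build system.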
### TARGET_ARCH
To cross compile or build for a mobile platform, you must specify the target platform. By default the build detects the host operating system and builds for that operating system.
Options: `WINDOWS | LINUX | APPLE | ANDROID`
### G
Use this variable to generate build artifacts, such as Visual Studio solutions and Xcode projects.
Windows example:
```sh
-G "Visual Studio 12 Win64"
```
For more information, see the CMake documentation for your platform.
### ENABLE_UNITY_BUILD
(Defaults to ON) If enabled, each SDK service library will be built from a single, unified, generated .cpp file. This can significantly reduce static library size as well as speed up compilation time.
### MINIMIZE_SIZE
(Defaults to OFF) A superset of ENABLE_UNITY_BUILD, if enabled this option turns on ENABLE_UNITY_BUILD as well as some additional binary size reduction settings. This is a work-in-progress and may change in the future (symbol stripping in particular).
### BUILD_SHARED_LIBS
(Defaults to ON) A built-in CMake option, reexposed here for visibility. If enabled, shared libraries will be built, otherwise static libraries will be built.
### FORCE_SHARED_CRT
(Defaults to ON) If enabled, the SDK will link to the C runtime dynamically, otherwise it will use the BUILD_SHARED_LIBS setting (weird but necessary for backwards compatibility with older versions of the SDK)
### SIMPLE_INSTALL
(Defaults to ON) If enabled, the install process will not insert platform-specific intermediate directories underneath bin/ and lib/. Turn OFF if you need to make multi-platform releases under a single install directory.
### NO_HTTP_CLIENT
(Defaults to OFF) If enabled, prevents the default platform-specific http client from being built into the library. Turn this ON if you wish to inject your own http client implementation.
### NO_ENCRYPTION
(Defaults to OFF) If enabled, prevents the default platform-specific cryptography implementation from being built into the library. Turn this ON if you wish to inject your own cryptography implementation.
### ENABLE_RTTI
(Defaults to ON) Controls whether or not the SDK is built with RTTI information
### CPP_STANDARD
(Defaults to 11) Allows you to specify a custom C++ standard, for use with C++14 and C++17 code bases
### ENABLE_TESTING
(Defaults to ON) Controls whether or not the unit and integration test projects are built
### ENABLE_VIRTUAL_OPERATIONS
(Defaults to ON) This option usually works with REGENERATE_CLIENTS.
If enabled when doing code generation (REGENERATE_CLIENTS=ON), operation related functions in service clients will be marked as `virtual`.
If disabled when doing code generation (REGENERATE_CLIENTS=ON), `virtual` will not be added to operation functions and service client classes will be marked as final.
If disabled, SDK will also add compiler flags `-ffunction-sections -fdata-sections` for gcc and clang when compiling.
You can utilize this feature to work with your linker to reduce binary size of your application on Unix platforms when doing static linking in Release mode.
For example, if your system uses `ld` as linker, then you can turn this option OFF when building SDK, and specify linker flag `--gc-sections` (or `-dead_strip` on Mac) in your own build scripts.
You can also tell gcc or clang to pass these linker flags by specifying `-Wl,--gc-sections`, or `-Wl,-dead_strip`. Or via `-DCMAKE_CXX_FLAGS="-Wl,[flag]"` if you use CMake.
## Android CMake Variables/Options
### NDK_DIR
An override path for where the build system should find the Android NDK. By default, the build system will check environment variables (ANDROID_NDK) if this CMake variable is not set.
### ANDROID_STL
(Defaults to libc++\_shared) Controls what flavor of the C++ standard library the SDK will use. Valid values are one of {libc++\_shared, libc++\_static, gnustl_shared, gnustl_static}. There are severe performance problems within the SDK if gnustl is used and gnustl was deprecated starting from Android NDK 18, so we recommend libc++.
### ANDROID_ABI
(Defaults to armeabi-v7a) Controls what ABI to output code for. Not all valid Android ABI values are currently supported, but we intend to provide full coverage in the future. We welcome patches to our OpenSSL build wrapper that speed this process up. Valid values are one of {arm64, armeabi-v7a, x86_64, x86, mips64, mips}.
### ANDROID_TOOLCHAIN
(Defaults to clang) Controls which compiler is used to build the SDK. With GCC being deprecated by Android NDK, we recommend using the default (clang).
### ANDROID_NATIVE_API_LEVEL
(Default varies by STL choice) Controls what API level the SDK will be built against. If you use gnustl, you have complete freedom with the choice of API level. If you use libc++, you must use an API level of at least 21.

# CODING STANDARDS
* Don't make changes to generated clients directly. Make your changes in the generator. Changes to Core, Scripts, and High-Level interfaces are fine directly in the code.
* Do not use non-trivial statics anywhere. This will cause custom memory managers to crash in random places.
* Use 4 spaces for indents and never use tabs.
* No exceptions.... no exceptions. Use the Outcome pattern for returning data if you need to also return an optional error code.
* Always think about platform independence. If this is impossible, put a nice abstraction on top of it and use an abstract factory.
* Use RAII, Aws::New and Aws::Delete should only appear in constructors and destructors.
* Be sure to follow the rule of 5.
* Use the C++ 11 standard where possible.
* Use UpperCamelCase for custom type names and function names. Use m_* for member variables. Don't use statics. If you must, use UpperCamelCase for static variables.
* Always be const correct, and be mindful of when you need to support r-values. We don't trust compilers to optimize this uniformly across builds, so please be explicit.
* Namespace names should be UpperCamelCase. Never put a using namespace statement in a header file unless it is scoped by a class. It is fine to use a using namespace statement in a cpp file.
* Use enum class, not enum
* Prefer #pragma once for include guards.
* Forward declare whenever possible.
* Use nullptr instead of NULL.
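Several of these rules can be seen together in one header-style sketch (all names here are hypothetical, purely for illustration):

```cpp
#pragma once  // preferred include guard style

namespace MyService            // UpperCamelCase namespace
{
    class WidgetClient;        // forward declaration instead of an #include

    enum class WidgetState     // enum class, not plain enum
    {
        Idle,
        Busy
    };

    class WidgetTracker
    {
    public:
        WidgetState GetState() const { return m_state; }   // UpperCamelCase function names, const correct
    private:
        WidgetState m_state = WidgetState::Idle;           // m_* member variables
        WidgetClient* m_client = nullptr;                  // nullptr, never NULL
    };
} // namespace MyService
```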

# Client Configuration
You can use the client configuration to control most functionality in the AWS SDK for C++.
ClientConfiguration declaration:
```cpp
struct AWS_CORE_API ClientConfiguration
{
ClientConfiguration();
Aws::String userAgent;
Aws::Http::Scheme scheme;
Aws::Region region;
bool useDualStack;
unsigned maxConnections;
long requestTimeoutMs;
long connectTimeoutMs;
std::shared_ptr<RetryStrategy> retryStrategy;
Aws::String endpointOverride;
Aws::Http::Scheme proxyScheme;
Aws::String proxyHost;
unsigned proxyPort;
Aws::String proxyUserName;
Aws::String proxyPassword;
std::shared_ptr<Aws::Utils::Threading::Executor> executor;
bool verifySSL;
Aws::String caPath;
std::shared_ptr<Aws::Utils::RateLimits::RateLimiterInterface> writeRateLimiter;
std::shared_ptr<Aws::Utils::RateLimits::RateLimiterInterface> readRateLimiter;
};
```
### User Agent
The user agent is built in the constructor and pulls information from your operating system. Do not alter the user agent.
### Scheme
The default value for scheme is HTTPS. You can set this value to HTTP if the information you are passing is not sensitive and the service to which you want to connect supports an HTTP endpoint. AWS Auth protects you from tampering.
### Region
The region specifies where you want the client to communicate. Examples include us-east-1 or us-west-1. You must ensure the service you want to use has an endpoint in the region you configure.
### UseDualStack
Sets the endpoint calculation to use a dual-stack (IPv6-enabled) endpoint. It is your responsibility to check that the service actually supports IPv6 in the region you specify.
### Max Connections
The default value for the maximum number of allowed connections to a single server for your HTTP communications is 25. You can set this value as high as your bandwidth can support. We recommend a value around 25.
### Request Timeout and Connection Timeout
These values determine the length of time, in milliseconds, to wait before timing out a request or a connection attempt. You can increase the request timeout if you need to transfer large files, such as with Amazon S3 or CloudFront.
### Retry Strategy
The retry strategy defaults to exponential backoff. You can override this default by implementing a subclass of RetryStrategy and passing an instance.
### Endpoint Override
Do not alter the endpoint.
### Proxy Scheme, Host, Port, User Name, and Password
These settings allow you to configure a proxy for all communication with AWS. Examples of when this functionality might be useful include debugging in conjunction with the Burp suite, or using a proxy to connect to the internet.
### Executor
The default behavior for the executor is to create and detach a thread for each async call. You can change this behavior by implementing a subclass of Executor and passing an instance. We now provide a thread pooled executor as an option. For more information see this blog post: https://aws.amazon.com/blogs/developer/using-a-thread-pool-with-the-aws-sdk-for-c/
### Verify SSL
If necessary, you can disable SSL certificate verification by setting the verify SSL value to false.
### CA Path
You can tell the HTTP client where to find your certificate trust store (e.g., a directory prepared with the OpenSSL c_rehash utility). This should not be necessary unless you are doing some unusual symlink-farm setup in your environment. This has no effect on Windows or macOS.
### Write Rate Limiter and Read Rate Limiter
The write and read rate limiters are used to throttle the bandwidth used by the transport layer. The default for these limiters is open. You can use the default implementation with your desired rates, or you can create your own instance by implementing a subclass of RateLimiterInterface.

# Credentials Providers
You can use the AWSCredentialsProvider interface to provide login credentials to AWS Auth. Implement this interface to provide your own method of supplying credentials. We also provide default credentials providers.
## Default Credential Provider Chain
The default credential provider chain does the following:
1. Checks your environment variables for AWS Credentials
2. Checks your $HOME/.aws/credentials file for a profile and credentials
3. Contacts and logs in to a trusted identity provider (Cognito, Login with Amazon, Facebook, Google). The SDK looks for the login information for these providers either in the environment variables AWS_ROLE_ARN, AWS_WEB_IDENTITY_TOKEN_FILE, and AWS_ROLE_SESSION_NAME, or in a profile in your $HOME/.aws/credentials file.
4. Checks $HOME/.aws/config for an external method, set as part of a profile, that generates or looks up credentials not directly supported by AWS.
5. Contacts the ECS TaskRoleCredentialsProvider service to request credentials if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI has been set.
6. Contacts the EC2MetadataInstanceProfileCredentialsProvider service to request credentials if AWS_EC2_METADATA_DISABLED is NOT set to ON.
The simplest way to communicate with AWS is to ensure we can find your credentials in one of these locations.
## Other Methods
We also support two other methods for providing credentials:
* Provide your credentials in your clients constructor.
* Use Amazon Cognito Identity, which is an identity management solution. You can use the CognitoCaching*CredentialsProviders classes in the identity-management project. For more information, see the *Amazon Cognito Developer Guide*.

# Memory Management
The AWS SDK for C++ provides a way to control memory allocation and deallocation in a library.
Custom memory management is available only if you use a version of the library built with the compile-time constant AWS_CUSTOM_MEMORY_MANAGEMENT defined.
If you use a version of the library built without the compile-time constant, the global memory system functions such as InitializeAWSMemorySystem will not work and the global new and delete functions will be used instead.
For more information about the compile-time constant, see the STL and AWS Strings and Vectors section in this Readme.
To allocate or deallocate memory:
1. Implement a subclass of MemorySystemInterface, declared in aws/core/utils/memory/MemorySystemInterface.h.
In the following example, the type signature for AllocateMemory can be changed as needed:
```cpp
class MyMemoryManager : public Aws::Utils::Memory::MemorySystemInterface
{
public:
// ...
virtual void* AllocateMemory(std::size_t blockSize, std::size_t alignment, const char *allocationTag = nullptr) override;
virtual void FreeMemory(void* memoryPtr) override;
};
```
In Main:
```cpp
int main(void)
{
MyMemoryManager sdkMemoryManager;
SDKOptions options;
options.memoryManagementOptions.memoryManager = &sdkMemoryManager;
Aws::InitAPI(options);
// ... do stuff
Aws::ShutdownAPI(options);
return 0;
}
```
## STL and AWS Strings and Vectors
When initialized with a memory manager, the AWS SDK for C++ defers all allocation and deallocation to the memory manager. If a memory manager does not exist, the SDK uses global new and delete.
If you use custom STL allocators, you must alter the type signatures for all STL objects to match the allocation policy. Because STL is used prominently in the SDK implementation and interface, a single approach in the SDK would inhibit direct passing of default STL objects into the SDK or control of STL allocation. Alternatively, a hybrid approach using custom allocators internally while allowing standard and custom STL objects on the interface could make memory issues more difficult to investigate.
The solution is to use the memory system's compile-time constant AWS_CUSTOM_MEMORY_MANAGEMENT to control which STL types the SDK will use.
If the compile-time constant is enabled (on), the types resolve to STL types with a custom allocator connected to the AWS memory system.
If the compile-time constant is disabled (off), all Aws::* types resolve to the corresponding default std::* type.
Example code from the AWSAllocator.h file in the SDK:
```cpp
#ifdef AWS_CUSTOM_MEMORY_MANAGEMENT
template< typename T >
class AwsAllocator : public std::allocator< T >
{
... definition of allocator that uses AWS memory system
};
#else
template< typename T > using Allocator = std::allocator<T>;
#endif
```
In the example code, the AwsAllocator can be either a custom allocator or a default allocator, depending on the compile-time constant.
Example code from the AWSVector.h file in the SDK:
`template< typename T > using Vector = std::vector< T, Aws::Allocator< T > >;`
In the example code, we define the Aws::* types.
If the compile-time constant is enabled (on), the type maps to a vector using custom memory allocation and the AWS memory system.
If the compile-time constant is disabled (off), the type maps to a regular std::vector with default type parameters.
Type aliasing is used for all std:: types in the SDK that perform memory allocation, such as containers, string stream, and string buf. The AWS SDK for C++ uses these types.
## Native SDK Developers and Memory Controls
Follow these rules in the SDK code:
* Do not use new and delete; use Aws::New<> and Aws::Delete<>.
* Do not use new[] and delete []; use Aws::NewArray<> and Aws::DeleteArray<>.
* Do not use std::make_shared; use Aws::MakeShared.
* Use Aws::UniquePtr for unique pointers to a single object. Use the Aws::MakeUnique function to create the unique pointer.
* Use Aws::UniqueArray for unique pointers to an array of objects. Use the Aws::MakeUniqueArray function to create the unique pointer.
* Do not directly use STL containers; use one of the Aws::typedefs or add a typedef for the desired container. Example: `Aws::Map<Aws::String, Aws::String> m_kvPairs;`
* Use shared_ptr for any external pointer passed into and managed by the SDK. You must initialize the shared pointer with a destruction policy that matches how the object was allocated. You can use a raw pointer if the SDK is not expected to clean up the pointer.
## Remaining Issues
You can control memory allocation in the SDK; however, STL types still dominate the public interface through string parameters to the model objects' initialize and set methods. If you choose not to use STL strings and containers in your code, you must create many temporaries whenever you make a service call.
To remove most of the temporaries and allocations when service calls are made without STL, we have implemented the following:
* Every Init/Set function that takes a string has an overload that takes a const char*.
* Every Init/Set function that takes a container (map/vector) has an add variant that takes a single entry.
* Every Init/Set function that takes binary data has an overload that takes a pointer to the data and a length value.
* (Optional) Every Init/Set function that takes a string has an overload that takes a non-null-terminated const char* and a length value.

# AWS SDK for C++ Documentation
Here you can find some helpful information on usage of the SDK.
* [Using the SDK](./SDK_usage_guide.md)
* [CMake Parameters](./CMake_Parameters.md)
* [Credentials Providers](./Credentials_Providers.md)
* [Client Configuration Parameters](./ClientConfiguration_Parameters.md)
* [Service Client](./Service_Client.md)
* [Memory Management](./Memory_Management.md)
* [Advanced Topics](./Advanced_topics.md)
* [Coding Standards](./CODING_STANDARDS.md)

# Using the SDK
Once constructed, individual service clients are very similar to those in other SDKs, such as Java and .NET. This section explains how the core works, how to use each feature, and how to construct an individual client.
The aws-cpp-sdk-core is the heart of the system and does the heavy lifting. You can write a client to connect to any AWS service using just the core, and the individual service clients are available to help make the process a little easier.
## Running integration tests:
Several directories are suffixed with \*integration-tests. After building your project, you can run these executables to ensure everything works properly.
## Build Defines
If you dynamically link to the SDK you will need to define the USE_IMPORT_EXPORT symbol for all build targets using the SDK.
If you wish to install your own memory manager to handle allocations made by the SDK, you will need to pass the CUSTOM_MEMORY_MANAGEMENT CMake parameter (-DCUSTOM_MEMORY_MANAGEMENT) as well as define AWS_CUSTOM_MEMORY_MANAGEMENT in all build targets dependent on the SDK.
Note, if you use our export file, this will be handled automatically for you. We recommend you use our export file to handle this for you:
https://aws.amazon.com/blogs/developer/using-cmake-exports-with-the-aws-sdk-for-c/
## Initialization and Shutdown
We avoid global and static state wherever possible. However, on some platforms, dependencies need to be globally initialized. We also have a few global options, such as
logging, memory management, http factories, and crypto factories. As a result, before using the SDK you MUST call our global initialization function, and when you are finished using the SDK you should call our cleanup function.
All code using the AWS SDK for C++ should have at least the following:
```cpp
#include <aws/core/Aws.h>
...
Aws::SDKOptions options;
Aws::InitAPI(options);
//use the sdk
Aws::ShutdownAPI(options);
```
Due to the way memory managers work, many of the configuration options take closures instead of pointers directly, in order to ensure that the memory manager
is installed prior to any memory allocations occurring.
Here are a few recipes:
* Just use defaults:
```cpp
Aws::SDKOptions options;
Aws::InitAPI(options);
.....
Aws::ShutdownAPI(options);
```
* Install custom memory manager:
```cpp
MyMemoryManager memoryManager;
Aws::SDKOptions options;
options.memoryManagementOptions.memoryManager = &memoryManager;
Aws::InitAPI(options);
.....
Aws::ShutdownAPI(options);
```
* Override default http client factory:
```cpp
Aws::SDKOptions options;
options.httpOptions.httpClientFactory_create_fn = [](){ return Aws::MakeShared<MyCustomHttpClientFactory>("ALLOC_TAG", arg1); };
Aws::InitAPI(options);
.....
Aws::ShutdownAPI(options);
```
## Logging
The AWS SDK for C++ includes logging support that you can configure. When initializing the logging system, you can control the filter level and the logging target (file with a name that has a configurable prefix or a stream). The log file generated by the prefix option rolls over once per hour to allow for archiving or deleting log files.
You can provide your own logger. However, it is incredibly simple to use the default logger we've already provided:
In your main function:
```cpp
SDKOptions options;
options.loggingOptions.logLevel = Aws::Utils::Logging::LogLevel::Info;
Aws::InitAPI(options);
//do SDK stuff;
Aws::ShutdownAPI(options);
```

# Service Clients
You can use the default constructor, or you can use the system interfaces to construct a service client.
As an example, the following code creates an Amazon DynamoDB client using a specialized client configuration, default credentials provider chain, and default HTTP client factory:
```cpp
auto limiter = Aws::MakeShared<Aws::Utils::RateLimits::DefaultRateLimiter<>>(ALLOCATION_TAG, 200000);
// Create a client
ClientConfiguration config;
config.scheme = Scheme::HTTPS;
config.connectTimeoutMs = 30000;
config.requestTimeoutMs = 30000;
config.readRateLimiter = limiter;
config.writeRateLimiter = limiter;
auto client = Aws::MakeShared<DynamoDBClient>(ALLOCATION_TAG, config);
```
You can also do the following to manually pass credentials:
`auto client = Aws::MakeShared<DynamoDBClient>(ALLOCATION_TAG, AWSCredentials("access_key_id", "secret_key"), config);`
Or you can do the following to use a custom credentials provider:
`auto client = Aws::MakeShared<DynamoDBClient>(ALLOCATION_TAG, Aws::MakeShared<CognitoCachingAnonymousCredentialsProvider>(ALLOCATION_TAG, "identityPoolId", "accountId"), config);`
Now you can use your Amazon DynamoDB client.

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Note: Other license terms may apply to certain, identified software files contained within or distributed with the accompanying software if such terms are included in the directory containing the accompanying software. Such other license terms will then apply in lieu of the terms of the software license above.
JSON processing code subject to the MIT License from http://en.wikipedia.org/wiki/MIT_License
XML processing code is subject to the license at (www.grinninglizard.com)
Android build logic code is subject to the MIT License from http://en.wikipedia.org/wiki/MIT_License
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
The Software shall be used for Good, not Evil.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -0,0 +1,168 @@
# CMAKE generated file: DO NOT EDIT!
# Generated by "Unix Makefiles" Generator, CMake Version 3.17
# Default target executed when no arguments are given to make.
default_target: all
.PHONY : default_target
# Allow only one "make -f Makefile2" at a time, but pass parallelism.
.NOTPARALLEL:
#=============================================================================
# Special targets provided by cmake.
# Disable implicit rules so canonical targets will work.
.SUFFIXES:
# Disable VCS-based implicit rules.
% : %,v
# Disable VCS-based implicit rules.
% : RCS/%
# Disable VCS-based implicit rules.
% : RCS/%,v
# Disable VCS-based implicit rules.
% : SCCS/s.%
# Disable VCS-based implicit rules.
% : s.%
.SUFFIXES: .hpux_make_needs_suffix_list
# Command-line flag to silence nested $(MAKE).
$(VERBOSE)MAKESILENT = -s
# Suppress display of executed commands.
$(VERBOSE).SILENT:
# A target that is always out of date.
cmake_force:
.PHONY : cmake_force
#=============================================================================
# Set environment variables for the build.
# The shell in which to execute make rules.
SHELL = /bin/sh
# The CMake executable.
CMAKE_COMMAND = /usr/bin/cmake3
# The command to remove a file.
RM = /usr/bin/cmake3 -E rm -f
# Escaping for special characters.
EQUALS = =
# The top-level source directory on which CMake was run.
CMAKE_SOURCE_DIR = /home/pxz/project/hos_client_cpp_module/support
# The top-level build directory on which CMake was run.
CMAKE_BINARY_DIR = /home/pxz/project/hos_client_cpp_module/support/aws-sdk-cpp-master
#=============================================================================
# Targets provided globally by CMake.
# Special rule for the target rebuild_cache
rebuild_cache:
@$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Running CMake to regenerate build system..."
/usr/bin/cmake3 --regenerate-during-build -S$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR)
.PHONY : rebuild_cache
# Special rule for the target rebuild_cache
rebuild_cache/fast: rebuild_cache
.PHONY : rebuild_cache/fast
# Special rule for the target edit_cache
edit_cache:
@$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Running CMake cache editor..."
/usr/bin/ccmake3 -S$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR)
.PHONY : edit_cache
# Special rule for the target edit_cache
edit_cache/fast: edit_cache
.PHONY : edit_cache/fast
# The main all target
all: cmake_check_build_system
$(CMAKE_COMMAND) -E cmake_progress_start /home/pxz/project/hos_client_cpp_module/support/aws-sdk-cpp-master/CMakeFiles /home/pxz/project/hos_client_cpp_module/support/aws-sdk-cpp-master/CMakeFiles/progress.marks
$(MAKE) $(MAKESILENT) -f CMakeFiles/Makefile2 all
$(CMAKE_COMMAND) -E cmake_progress_start /home/pxz/project/hos_client_cpp_module/support/aws-sdk-cpp-master/CMakeFiles 0
.PHONY : all
# The main clean target
clean:
$(MAKE) $(MAKESILENT) -f CMakeFiles/Makefile2 clean
.PHONY : clean
# The main clean target
clean/fast: clean
.PHONY : clean/fast
# Prepare targets for installation.
preinstall: all
$(MAKE) $(MAKESILENT) -f CMakeFiles/Makefile2 preinstall
.PHONY : preinstall
# Prepare targets for installation.
preinstall/fast:
$(MAKE) $(MAKESILENT) -f CMakeFiles/Makefile2 preinstall
.PHONY : preinstall/fast
# clear depends
depend:
$(CMAKE_COMMAND) -S$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR) --check-build-system CMakeFiles/Makefile.cmake 1
.PHONY : depend
#=============================================================================
# Target rules for targets named aws-sdk-cpp-master
# Build rule for target.
aws-sdk-cpp-master: cmake_check_build_system
$(MAKE) $(MAKESILENT) -f CMakeFiles/Makefile2 aws-sdk-cpp-master
.PHONY : aws-sdk-cpp-master
# fast build rule for target.
aws-sdk-cpp-master/fast:
$(MAKE) $(MAKESILENT) -f CMakeFiles/aws-sdk-cpp-master.dir/build.make CMakeFiles/aws-sdk-cpp-master.dir/build
.PHONY : aws-sdk-cpp-master/fast
# Help Target
help:
@echo "The following are some of the valid targets for this Makefile:"
@echo "... all (the default if no target is provided)"
@echo "... clean"
@echo "... depend"
@echo "... edit_cache"
@echo "... rebuild_cache"
@echo "... aws-sdk-cpp-master"
.PHONY : help
#=============================================================================
# Special targets to cleanup operation of make.
# Special rule to run CMake to check the build system integrity.
# No rule that depends on this can have commands that come from listfiles
# because they might be regenerated.
cmake_check_build_system:
$(CMAKE_COMMAND) -S$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR) --check-build-system CMakeFiles/Makefile.cmake 0
.PHONY : cmake_check_build_system


@@ -0,0 +1,16 @@
AWS SDK for C++
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
This product includes software developed by
Amazon Technologies, Inc (http://www.amazon.com/).
**********************
THIRD PARTY COMPONENTS
**********************
This software includes third party software subject to the following copyrights:
- XML parsing and utility functions from TinyXml2 - Lee Thomason.
- JSON parsing and utility functions from JsonCpp - Copyright (c) 2007-2010 Baptiste Lepilleur.
- OpenSSL build files for cmake used for Android Builds - Copyright (C) 2007-2012 LuaDist and Copyright (C) 2013 Brian Sidebotham
- Android tool chain cmake build files - Copyright (c) 2010-2011, Ethan Rublee and Copyright (c) 2011-2014, Andrey Kamaev
The licenses for these third party components are included in LICENSE.txt


@@ -0,0 +1,162 @@
# AWS SDK for C++
The AWS SDK for C++ provides a modern C++ (C++11 or later) interface for Amazon Web Services (AWS). It aims to be performant and fully functional, offering both low- and high-level APIs, while minimizing dependencies and providing platform portability (Windows, macOS, Linux, and mobile).
The AWS SDK for C++ is now in General Availability and recommended for production use. We invite our customers to join
the development efforts by submitting pull requests and sending us feedback and ideas via GitHub Issues.
## Version 1.8 is now Available!
Version 1.8 introduces much-requested new features and changes to the SDK. Because these changes may cause compatibility issues with previous versions, we've decided to keep it as a separate branch to make the transition less jarring.
For more information, see the [What's New in AWS SDK for CPP Version 1.8](https://github.com/aws/aws-sdk-cpp/wiki/What%E2%80%99s-New-in-AWS-SDK-for-CPP-Version-1.8) entry of the wiki, and please provide any feedback you have on these changes via our pinned [issue](https://github.com/aws/aws-sdk-cpp/issues/1373).
__Jump To:__
* [Getting Started](#Getting-Started)
* [Issues and Contributions](#issues-and-contributions)
* [Getting Help](#Getting-Help)
* [Using the SDK and Other Topics](#Using-the-SDK-and-Other-Topics)
# Getting Started
## Building the SDK:
### Minimum Requirements:
* Visual Studio 2015 or later
* OR GNU Compiler Collection (GCC) 4.9 or later
* OR Clang 3.3 or later
* 4GB of RAM
* 4GB of RAM is required to build some of the larger clients. The SDK build may fail on EC2 instance types t2.micro, t2.small and other small instance types due to insufficient memory.
### Building From Source:
#### To create an **out-of-source build**:
1. Install CMake and the relevant build tools for your platform. Ensure these are available in your executable path.
2. Create your build directory, referred to below as `<BUILD_DIR>`, for example with `mkdir <BUILD_DIR>`.
3. Build the project:
* For Auto Make build systems:
```sh
cd <BUILD_DIR>
cmake <path-to-root-of-this-source-code> -DCMAKE_BUILD_TYPE=Debug
make
sudo make install
```
* For Visual Studio:
```sh
cd <BUILD_DIR>
cmake <path-to-root-of-this-source-code> -G "Visual Studio 15 Win64" -DCMAKE_BUILD_TYPE=Debug
msbuild ALL_BUILD.vcxproj /p:Configuration=Debug
```
* For macOS - Xcode:
```sh
cmake <path-to-root-of-this-source-code> -G Xcode -DTARGET_ARCH="APPLE" -DCMAKE_BUILD_TYPE=Debug
xcodebuild -target ALL_BUILD
```
### Third party dependencies:
Starting from version 1.7.0, we added several third party dependencies, including [`aws-c-common`](https://github.com/awslabs/aws-c-common), [`aws-checksums`](https://github.com/awslabs/aws-checksums) and [`aws-c-event-stream`](https://github.com/awslabs/aws-c-event-stream). By default, they will be built and installed in `<BUILD_DIR>/.deps/install`, and copied to the default system directory during SDK installation. You can change the location by specifying `CMAKE_INSTALL_PREFIX`.
However, if you want to build and install these libraries in custom locations:
1. Download, build and install `aws-c-common`:
```sh
git clone https://github.com/awslabs/aws-c-common
cd aws-c-common
# checkout to a specific commit id if you want.
git checkout <commit-id>
mkdir build && cd build
# without CMAKE_INSTALL_PREFIX, it will be installed to default system directory.
cmake .. -DCMAKE_INSTALL_PREFIX=<deps-install-dir> <extra-cmake-parameters-here>
make # or MSBuild ALL_BUILD.vcxproj on Windows
make install # or MSBuild INSTALL.vcxproj on Windows
```
2. Download, build and install `aws-checksums`:
```sh
git clone https://github.com/awslabs/aws-checksums
cd aws-checksums
# checkout to a specific commit id if you want
git checkout <commit-id>
mkdir build && cd build
# without CMAKE_INSTALL_PREFIX, it will be installed to default system directory.
cmake .. -DCMAKE_INSTALL_PREFIX=<deps-install-dir> <extra-cmake-parameters-here>
make # or MSBuild ALL_BUILD.vcxproj on Windows
make install # or MSBuild INSTALL.vcxproj on Windows
```
3. Download, build and install `aws-c-event-stream`:
```sh
git clone https://github.com/awslabs/aws-c-event-stream
cd aws-c-event-stream
# checkout to a specific commit id if you want
git checkout <commit-id>
mkdir build && cd build
# aws-c-common and aws-checksums are dependencies of aws-c-event-stream
# without CMAKE_INSTALL_PREFIX, it will be installed to default system directory.
cmake .. -DCMAKE_INSTALL_PREFIX=<deps-install-dir> -DCMAKE_PREFIX_PATH=<deps-install-dir> <extra-cmake-parameters-here>
make # or MSBuild ALL_BUILD.vcxproj on Windows
make install # or MSBuild INSTALL.vcxproj on Windows
```
4. Turn off `BUILD_DEPS` when building C++ SDK:
```sh
cd BUILD_DIR
cmake <path-to-root-of-this-source-code> -DBUILD_DEPS=OFF -DCMAKE_PREFIX_PATH=<deps-install-dir>
```
You may also find the following link helpful for including the build in your project:
https://aws.amazon.com/blogs/developer/using-cmake-exports-with-the-aws-sdk-for-c/
#### Other Dependencies:
To compile on Linux, you must have the header files for libcurl and OpenSSL. These packages are typically available in your package manager.
Debian/Ubuntu example:
`sudo apt-get install libcurl4-openssl-dev libssl-dev`
### Building for Android
To build for Android, add `-DTARGET_ARCH=ANDROID` to your cmake command line. Currently we support Android API levels 19 to 28 with Android NDK r19c, using the built-in CMake toolchain file supplied by the Android NDK; make sure the appropriate environment variables (such as `ANDROID_NDK`) are set.
##### Android on Windows
Building for Android on Windows requires some additional setup. In particular, you will need to run cmake from a Visual Studio developer command prompt (2015 or higher). Additionally, you will need 'git' and 'patch' in your path. If you have git installed on a Windows system, then patch is likely found in a sibling directory (.../Git/usr/bin/). Once you've verified these requirements, your cmake command line will change slightly to use nmake:
```sh
cmake -G "NMake Makefiles" -DTARGET_ARCH=ANDROID <other options> ..
```
NMake builds targets serially. To speed things up, we recommend installing JOM as an alternative to NMake and changing the cmake invocation to:
```sh
cmake -G "NMake Makefiles JOM" -DTARGET_ARCH=ANDROID <other options> ..
```
### Building for Docker
To build for Docker, ensure your container meets the [minimum requirements](#minimum-requirements). By default, Docker Desktop is set to use 2 GB runtime memory. We have provided [Dockerfiles](https://github.com/aws/aws-sdk-cpp/tree/master/CI/docker-file) as templates for building the SDK in a container.
### Building and running an app on EC2
Check out this walkthrough on how to set up an environment and build the [AWS SDK for C++ on an EC2 instance](https://github.com/aws/aws-sdk-cpp/wiki/Building-the-SDK-from-source-on-EC2).
# Issues and Contributions
We welcome all kinds of contributions, check [this guideline](./CONTRIBUTING.md) to learn how you can contribute or report issues.
# Getting Help
The best way to interact with our team is through GitHub. You can [open an issue](https://github.com/aws/aws-sdk-cpp/issues/new/choose) and choose from one of our templates for guidance, bug reports, or feature requests. You may also find help on community resources such as [Stack Overflow](https://stackoverflow.com/questions/tagged/aws-sdk-cpp) with the tag `aws-sdk-cpp`, or, if you have a support plan with [AWS Support](https://aws.amazon.com/premiumsupport/), you can create a new support case.
Please make sure to check out our resources too before opening an issue:
* Our [Developer Guide](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/welcome.html) and [API reference](http://sdk.amazonaws.com/cpp/api/LATEST/index.html)
* Our [Changelog](./CHANGELOG.md) for recent breaking changes.
* Our [Contribute](./CONTRIBUTING.md) guide.
* Our [samples repo](https://github.com/awsdocs/aws-doc-sdk-examples/tree/master/cpp).
# Using the SDK and Other Topics
* [Using the SDK](./Docs/SDK_usage_guide.md)
* [CMake Parameters](./Docs/CMake_Parameters.md)
* [Credentials Providers](./Docs/Credentials_Providers.md)
* [Client Configuration Parameters](./Docs/ClientConfiguration_Parameters.md)
* [Service Client](./Docs/Service_Client.md)
* [Memory Management](./Docs/Memory_Management.md)
* [Advanced Topics](./Docs/Advanced_topics.md)
* [Coding Standards](./Docs/CODING_STANDARDS.md)
* [License](./LICENSE)
* [Code of Conduct](./CODE_OF_CONDUCT.md)


@@ -0,0 +1,4 @@
# exceptions due to naming conflicts between our external projects (curl/openssl) and implementations that use those libraries
!patches/zlib
!patches/curl


@@ -0,0 +1,12 @@
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0.
#
SET( HAVE_GLIBC_STRERROR_R 1 CACHE STRING "Result from TRY_RUN" FORCE)
SET( HAVE_GLIBC_STRERROR_R__TRYRUN_OUTPUT "" CACHE STRING "Output from TRY_RUN" FORCE)
SET( HAVE_POSIX_STRERROR_R 0 CACHE STRING "Result from TRY_RUN" FORCE)
SET( HAVE_POSIX_STRERROR_R__TRYRUN_OUTPUT "" CACHE STRING "Output from TRY_RUN" FORCE)
SET( HAVE_POLL_FINE_EXITCODE 0 CACHE STRING "Result from TRY_RUN" FORCE )
SET( HAVE_POLL_FINE_EXITCODE__TRYRUN_OUTPUT "" CACHE STRING "Output from TRY_RUN" FORCE)
SET( OPENSSL_CRYPTO_LIBRARY crypto CACHE STRING "Set crypto" FORCE )
SET( OPENSSL_SSL_LIBRARY ssl CACHE STRING "Set ssl" FORCE )


@@ -0,0 +1,446 @@
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0.
#
import re
import os
import argparse
import subprocess
import shutil
import time
import datetime
import sys
TestName = "AndroidSDKTesting"
TestLowerName = TestName.lower()
class ArgumentException( Exception ):
    def __init__( self, argumentName, argumentValue ):
        self.m_argumentName = argumentName
        self.m_argumentValue = argumentValue
def ParseArguments():
parser = argparse.ArgumentParser(description="AWSNativeSDK Android Test Script")
parser.add_argument("--clean", action="store_true")
parser.add_argument("--emu", action="store_true")
parser.add_argument("--abi", action="store")
parser.add_argument("--avd", action="store")
parser.add_argument("--nobuild", action="store_true")
parser.add_argument("--noinstall", action="store_true")
parser.add_argument("--runtest", action="store")
parser.add_argument("--credentials", action="store")
parser.add_argument("--build", action="store")
parser.add_argument("--so", action="store_true")
parser.add_argument("--stl", action="store")
args = vars( parser.parse_args() )
argMap = {}
argMap[ "clean" ] = args[ "clean" ]
argMap[ "abi" ] = args[ "abi" ] or "armeabi-v7a"
argMap[ "avd" ] = args[ "avd" ]
argMap[ "useExistingEmulator" ] = args[ "emu" ]
argMap[ "noBuild" ] = args[ "nobuild" ]
argMap[ "noInstall" ] = args[ "noinstall" ]
argMap[ "credentialsFile" ] = args[ "credentials" ] or "~/.aws/credentials"
argMap[ "buildType" ] = args[ "build" ] or "Release"
argMap[ "runTest" ] = args[ "runtest" ]
argMap[ "so" ] = args[ "so" ]
argMap[ "stl" ] = args[ "stl" ] or "libc++_shared"
return argMap
def IsValidABI(abi):
return abi == "armeabi-v7a"
def ShouldBuildClean(abi, buildDir):
if not os.path.exists( buildDir ):
return True
abiPattern = re.compile(r"ANDROID_ABI:STRING=\s*(?P<abi>\S+)")
for _, line in enumerate(open(buildDir + "/CMakeCache.txt")):
result = abiPattern.search(line)
if result != None:
return result.group("abi") != abi
return False
def BuildAvdAbiSet():
namePattern = re.compile(r"Name:\s*(?P<name>\S+)")
abiPattern = re.compile(r"ABI: default/(?P<abi>\S+)")
avdList = subprocess.check_output(["android", "list", "avds"])
avdABIs = {}
currentName = None
for _, line in enumerate(avdList.splitlines()):
if not currentName:
nameResult = namePattern.search(line)
if nameResult != None:
currentName = nameResult.group("name")
else:
abiResult = abiPattern.search(line)
if abiResult != None:
avdABIs[currentName] = abiResult.group("abi")
currentName = None
return avdABIs
def DoesAVDSupportABI(avdAbi, abi):
if avdAbi == "armeabi-v7a":
return abi == "armeabi-v7a" or abi == "armeabi"
else:
return abi == avdAbi
def FindAVDForABI(abi, avdABIs):
for avdName in avdABIs:
if DoesAVDSupportABI(avdABIs[avdName], abi):
return avdName
return None
def IsValidAVD(avd, abi, avdABIs):
return DoesAVDSupportABI(avdABIs[avd], abi)
def GetTestList(buildSharedObjects):
if buildSharedObjects:
return [ 'core', 's3', 'dynamodb', 'cloudfront', 'cognitoidentity', 'identity', 'lambda', 'logging', 'redshift', 'sqs', 'transfer' ]
else:
return [ 'unified' ]
def ValidateArguments(buildDir, avd, abi, clean, runTest, buildSharedObjects):
validTests = GetTestList( buildSharedObjects )
if runTest not in validTests:
print( 'Invalid value for runtest option: ' + runTest )
print( 'Valid values are: ' )
print( ' ' + ", ".join( validTests ) )
raise ArgumentException('runtest', runTest)
if not IsValidABI(abi):
print('Invalid argument value for abi: ', abi)
print(' Valid values are "armeabi-v7a"')
raise ArgumentException('abi', abi)
if not clean and ShouldBuildClean(abi, buildDir):
clean = True
avdABIs = BuildAvdAbiSet()
if not avd:
print('No virtual device specified (--avd), trying to find one in the existing avd set...')
avd = FindAVDForABI(abi, avdABIs)
if not IsValidAVD(avd, abi, avdABIs):
print('Invalid virtual device: ', avd)
print(' Use --avd to set the virtual device')
print(' Use "android lists avds" to see all usable virtual devices')
raise ArgumentException('avd', avd)
return (avd, abi, clean)
def SetupJniDirectory(abi, clean):
path = os.path.join( TestName, "app", "src", "main", "jniLibs", abi )
if clean and os.path.exists(path):
shutil.rmtree(path)
if os.path.exists( path ) == False:
os.makedirs( path )
return path
def CopyNativeLibraries(buildSharedObjects, jniDir, buildDir, abi, stl):
baseToolchainDir = os.path.join(buildDir, 'toolchains', 'android')
toolchainDirList = os.listdir(baseToolchainDir) # should only be one entry
toolchainDir = os.path.join(baseToolchainDir, toolchainDirList[0])
platformLibDir = os.path.join(toolchainDir, "sysroot", "usr", "lib")
shutil.copy(os.path.join(platformLibDir, "liblog.so"), jniDir)
stdLibDir = os.path.join(toolchainDir, 'arm-linux-androideabi', 'lib')
if stl == 'libc++_shared':
shutil.copy(os.path.join(stdLibDir, "libc++_shared.so"), jniDir)
elif stl == 'gnustl_shared':
shutil.copy(os.path.join(stdLibDir, "armv7-a", "libgnustl_shared.so"), jniDir) # TODO: remove armv7-a hardcoded path
if buildSharedObjects:
soPattern = re.compile(r".*\.so$")
for rootDir, dirNames, fileNames in os.walk( buildDir ):
for fileName in fileNames:
if soPattern.search(fileName):
libFileName = os.path.join(rootDir, fileName)
shutil.copy(libFileName, jniDir)
else:
unifiedTestsLibrary = os.path.join(buildDir, "android-unified-tests", "libandroid-unified-tests.so")
shutil.copy(unifiedTestsLibrary, jniDir)
def RemoveTree(dir):
if os.path.exists( dir ):
shutil.rmtree( dir )
def BuildNative(abi, clean, buildDir, jniDir, installDir, buildType, buildSharedObjects, stl):
if clean:
RemoveTree(installDir)
RemoveTree(buildDir)
RemoveTree(jniDir)
for externalProjectDir in [ "openssl", "zlib", "curl" ]:
RemoveTree(externalProjectDir)
os.makedirs( jniDir )
os.makedirs( buildDir )
os.chdir( buildDir )
if not buildSharedObjects:
link_type_line = "-DBUILD_SHARED_LIBS=OFF"
else:
link_type_line = "-DBUILD_SHARED_LIBS=ON"
subprocess.check_call( [ "cmake",
link_type_line,
"-DCUSTOM_MEMORY_MANAGEMENT=ON",
"-DTARGET_ARCH=ANDROID",
"-DANDROID_ABI=" + abi,
"-DANDROID_STL=" + stl,
"-DCMAKE_BUILD_TYPE=" + buildType,
"-DENABLE_UNITY_BUILD=ON",
'-DTEST_CERT_PATH="/data/data/aws.' + TestLowerName + '/certs"',
'-DBUILD_ONLY=dynamodb;sqs;s3;lambda;kinesis;cognito-identity;transfer;iam;identity-management;access-management;s3-encryption',
".."] )
else:
os.chdir( buildDir )
if buildSharedObjects:
subprocess.check_call( [ "make", "-j12" ] )
else:
subprocess.check_call( [ "make", "-j12", "android-unified-tests" ] )
os.chdir( ".." )
CopyNativeLibraries(buildSharedObjects, jniDir, buildDir, abi, stl)
def BuildJava(clean):
os.chdir( TestName )
if clean:
subprocess.check_call( [ "./gradlew", "clean" ] )
subprocess.check_call( [ "./gradlew", "--refresh-dependencies" ] )
subprocess.check_call( [ "./gradlew", "assembleDebug" ] )
os.chdir( ".." )
def IsAnEmulatorRunning():
emulatorPattern = re.compile(r"(?P<emu>emulator-\d+)")
emulatorList = subprocess.check_output(["adb", "devices"])
for _, line in enumerate(emulatorList.splitlines()):
result = emulatorPattern.search(line)
if result:
return True
return False
def KillRunningEmulators():
emulatorPattern = re.compile(r"(?P<emu>emulator-\d+)")
emulatorList = subprocess.check_output(["adb", "devices"])
for _, line in enumerate(emulatorList.splitlines()):
result = emulatorPattern.search(line)
if result:
emulatorName = result.group( "emu" )
subprocess.check_call( [ "adb", "-s", emulatorName, "emu", "kill" ] )
def WaitForEmulatorToBoot():
time.sleep(5)
subprocess.check_call( [ "adb", "-e", "wait-for-device" ] )
print( "Device online; booting..." )
bootCompleted = False
bootAnimPlaying = True
while not bootCompleted or bootAnimPlaying:
time.sleep(1)
bootCompleted = subprocess.check_output( [ "adb", "-e", "shell", "getprop sys.boot_completed" ] ).strip() == "1"
bootAnimPlaying = subprocess.check_output( [ "adb", "-e", "shell", "getprop init.svc.bootanim" ] ).strip() != "stopped"
print( "Device booted" )
def InitializeEmulator(avd, useExistingEmu):
if not useExistingEmu:
KillRunningEmulators()
if not IsAnEmulatorRunning():
# this may not work on windows due to the shell and &
subprocess.Popen( "emulator -avd " + avd + " -gpu off &", shell=True ).communicate()
WaitForEmulatorToBoot()
#TEMPORARY: once we have android CI, we will adjust the emulator's CA set as a one-time step and then remove this step
def BuildAndInstallCertSet(pemSourceDir, buildDir):
# android's default cert set does not allow verification of Amazon's cert chain, so we build, install, and use our own set that works
certDir = os.path.join( buildDir, "certs" )
pemSourceFile = os.path.join( pemSourceDir, "cacert.pem" )
# assume that if the directory exists, then the cert set is valid and we just need to upload
if not os.path.exists( certDir ):
os.makedirs( certDir )
# extract all the certs in curl's master cacert.pem file out into individual .pem files
subprocess.check_call( "cat " + pemSourceFile + " | awk '{print > \"" + certDir + "/cert\" (1+n) \".pem\"} /-----END CERTIFICATE-----/ {n++}'", shell = True )
# use openssl to transform the certs into the hashname form that curl/openssl expects
subprocess.check_call( "c_rehash certs", shell = True, cwd = buildDir )
# The root (VeriSign 3) cert in Amazon's chain is missing from curl's master cacert.pem file and needs to be copied manually
shutil.copy(os.path.join( pemSourceDir, "certs", "415660c1.0" ), certDir)
shutil.copy(os.path.join( pemSourceDir, "certs", "7651b327.0" ), certDir)
subprocess.check_call( [ "adb", "shell", "rm -rf /data/data/aws." + TestLowerName + "/certs" ] )
subprocess.check_call( [ "adb", "shell", "mkdir /data/data/aws." + TestLowerName + "/certs" ] )
# upload all the hashed certs to the emulator
certPattern = re.compile(r".*\.0$")
for rootDir, dirNames, fileNames in os.walk( certDir ):
for fileName in fileNames:
if certPattern.search(fileName):
certFileName = os.path.join(rootDir, fileName)
subprocess.check_call( [ "adb", "push", certFileName, "/data/data/aws." + TestLowerName + "/certs" ] )
def UploadTestResources(resourcesDir):
for rootDir, dirNames, fileNames in os.walk( resourcesDir ):
for fileName in fileNames:
resourceFileName = os.path.join( rootDir, fileName )
subprocess.check_call( [ "adb", "push", resourceFileName, os.path.join( "/data/data/aws." + TestLowerName + "/resources", fileName ) ] )
def UploadAwsSigV4TestSuite(resourceDir):
for rootDir, dirNames, fileNames in os.walk( resourceDir ):
for fileName in fileNames:
resourceFileName = os.path.join( rootDir, fileName )
subDir = os.path.basename( rootDir )
subprocess.check_call( [ "adb", "push", resourceFileName, os.path.join( "/data/data/aws." + TestLowerName + "/resources", subDir, fileName ) ] )
def InstallTests(credentialsFile):
subprocess.check_call( [ "adb", "install", "-r", TestName + "/app/build/outputs/apk/app-debug.apk" ] )
subprocess.check_call( [ "adb", "logcat", "-c" ] ) # this doesn't seem to work
if credentialsFile and credentialsFile != "":
print( "uploading credentials" )
subprocess.check_call( [ "adb", "push", credentialsFile, "/data/data/aws." + TestLowerName + "/.aws/credentials" ] )
def TestsAreRunning(timeStart):
shutdownCalledOutput = subprocess.check_output( "adb logcat -t " + timeStart + " *:V | grep \"Shutting down TestActivity\"; exit 0 ", shell = True )
return not shutdownCalledOutput
def RunTest(testName):
time.sleep(5)
print( "Attempting to unlock..." )
subprocess.check_call( [ "adb", "-e", "shell", "input keyevent 82" ] )
logTime = datetime.datetime.now() + datetime.timedelta(minutes=-1) # the emulator and the computer do not appear to be in perfect sync
logTimeString = logTime.strftime("\"%m-%d %H:%M:%S.000\"")
time.sleep(5)
print( "Attempting to run tests..." )
subprocess.check_call( [ "adb", "shell", "am start -e test " + testName + " -n aws." + TestLowerName + "/aws." + TestLowerName + ".RunSDKTests" ] )
time.sleep(10)
while TestsAreRunning(logTimeString):
print( "Tests still running..." )
time.sleep(5)
print( "Saving logs..." )
    subprocess.Popen( "adb logcat -t " + logTimeString + " *:V | grep -a NativeSDK > AndroidTestOutput.txt", shell=True ).wait() # wait so AndroidTestOutput.txt is complete before it is grepped later
print( "Cleaning up..." )
subprocess.check_call( [ "adb", "shell", "pm clear aws." + TestLowerName ] )
def DidAllTestsSucceed():
failures = subprocess.check_output( "grep \"FAILED\" AndroidTestOutput.txt ; exit 0", shell = True )
    return not failures # check_output returns bytes on Python 3, so test for emptiness rather than comparing to ""
def Main():
args = ParseArguments()
avd = args[ "avd" ]
abi = args[ "abi" ]
clean = args[ "clean" ]
useExistingEmu = args[ "useExistingEmulator" ]
skipBuild = args[ "noBuild" ]
credentialsFile = args[ "credentialsFile" ]
buildType = args[ "buildType" ]
noInstall = args[ "noInstall" ]
buildSharedObjects = args[ "so" ]
runTest = args[ "runTest" ]
stl = args[ "stl" ]
buildDir = "_build" + buildType
    installDir = os.path.join( "external", abi )
if runTest:
avd, abi, clean = ValidateArguments(buildDir, avd, abi, clean, runTest, buildSharedObjects)
jniDir = SetupJniDirectory(abi, clean)
if not skipBuild:
BuildNative(abi, clean, buildDir, jniDir, installDir, buildType, buildSharedObjects, stl)
BuildJava(clean)
if not runTest:
return 0
print("Starting emulator...")
InitializeEmulator(avd, useExistingEmu)
if not noInstall:
print("Installing tests...")
InstallTests(credentialsFile)
print("Installing certs...")
BuildAndInstallCertSet("android-build", buildDir)
print("Uploading test resources")
UploadTestResources("aws-cpp-sdk-lambda-integration-tests/resources")
print("Uploading SigV4 test files")
UploadAwsSigV4TestSuite(os.path.join("aws-cpp-sdk-core-tests", "resources", "aws4_testsuite", "aws4_testsuite"))
print("Running tests...")
RunTest( runTest )
if not useExistingEmu:
KillRunningEmulators()
if DidAllTestsSucceed():
print( "All tests passed!" )
return 0
else:
print( "Some tests failed. See AndroidTestOutput.txt" )
return 1
if __name__ == "__main__":
    import sys
    sys.exit( Main() ) # propagate Main()'s 0/1 result as the process exit code
-----BEGIN CERTIFICATE-----
MIICPDCCAaUCEHC65B0Q2Sk0tjjKewPMur8wDQYJKoZIhvcNAQECBQAwXzELMAkG
A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz
cyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2
MDEyOTAwMDAwMFoXDTI4MDgwMTIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV
BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmlt
YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN
ADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhE
BarsAx94f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/is
I19wKTakyYbnsZogy1Olhec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0G
CSqGSIb3DQEBAgUAA4GBALtMEivPLCYATxQT3ab7/AoRhIzzKBxnki98tsX63/Do
lbwdj2wsqFHMc9ikwFPwTtYmwHYBV4GSXiHx0bH/59AhWM1pF+NEHJwZRDmJXNyc
AA9WjQKZ7aKQRUzkuxCkPfAyAw7xzvjoyVGM5mKf5p/AfbdynMk2OmufTqj/ZA1k
-----END CERTIFICATE-----
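This is the VeriSign root that BuildAndInstallCertSet copies manually under its OpenSSL subject-hash filenames (415660c1.0 and 7651b327.0). The `.0`-suffix filter the upload loop relies on can be exercised in isolation; the helper name below is hypothetical:

```python
import re

# c_rehash names each trusted cert "<subject-hash>.<n>"; the upload loop
# pushes only these hashed entries, not the intermediate certN.pem files.
CERT_PATTERN = re.compile(r".*\.0$")

def hashed_cert_names(file_names):
    """Filter a directory listing down to c_rehash-style '.0' entries."""
    return [name for name in file_names if CERT_PATTERN.search(name)]

print(hashed_cert_names(["415660c1.0", "7651b327.0", "cert1.pem", "README"]))
# → ['415660c1.0', '7651b327.0']
```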