OWAS SDK integration
Add the Maven repositories
In your settings.gradle, add the following to the repositories block:
maven {
    url "https://privately.jfrog.io/artifactory/margin-libs/"
}
maven {
    url "https://privately.jfrog.io/artifactory/owas-libs/"
}
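If you're unsure where these blocks go, here is a minimal settings.gradle sketch; it assumes the Gradle 7+ dependencyResolutionManagement style, so if your project declares repositories in the root build.gradle instead, put the maven blocks there.

// Minimal settings.gradle sketch (assumes Gradle 7+ repository declaration style)
dependencyResolutionManagement {
    repositories {
        google()
        mavenCentral()
        // Privately / OWAS artifact repositories
        maven {
            url "https://privately.jfrog.io/artifactory/margin-libs/"
        }
        maven {
            url "https://privately.jfrog.io/artifactory/owas-libs/"
        }
    }
}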
Add SDK dependencies
Add the following to the build.gradle of any module that uses the SDK:
implementation 'privately-sdk:core:1.0.1'
implementation 'owas-sdk:owas-nlp:1.0.2'
implementation 'org.tensorflow:tensorflow-lite:2.5.0'
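For placement, these lines belong in the dependencies block of the module-level build.gradle, roughly like this (the rest of the file is elided):

// Module-level build.gradle (only the dependencies block is shown)
dependencies {
    // OWAS SDK and its TensorFlow Lite runtime
    implementation 'privately-sdk:core:1.0.1'
    implementation 'owas-sdk:owas-nlp:1.0.2'
    implementation 'org.tensorflow:tensorflow-lite:2.5.0'
}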
You're good to go!
Perform some text analyses
Make sure you have an API key and secret.
If you haven't created an account yet, visit https://developer.privately.eu/contact-us to register for a free trial.
Authenticate the core SDK
PrivatelyCore.INSTANCE.init(this);
if (!PrivatelyCore.INSTANCE.isAuthenticated()) {
    PrivatelyCore.INSTANCE.authenticate("API key", "secret key",
            new PrivatelyCore.AuthenticationCallback() {
                @Override
                public void onSuccess() {
                    Log.i(TAG, "Authenticated successfully");
                }

                @Override
                public void onError(String error) {
                    Log.e(TAG, "Error authenticating: " + error);
                }
            });
}
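As a sketch, here is the same call wired into an Activity's onCreate; the class name and TAG are illustrative, the PrivatelyCore import from the SDK is omitted, and you would substitute your real API key and secret.

import android.os.Bundle;
import android.util.Log;
import androidx.appcompat.app.AppCompatActivity;
// plus the PrivatelyCore import from the SDK

public class MainActivity extends AppCompatActivity {

    private static final String TAG = "MainActivity"; // illustrative tag

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Initialise the core SDK before any analyzer is used.
        PrivatelyCore.INSTANCE.init(this);
        if (!PrivatelyCore.INSTANCE.isAuthenticated()) {
            PrivatelyCore.INSTANCE.authenticate("API key", "secret key",
                    new PrivatelyCore.AuthenticationCallback() {
                        @Override
                        public void onSuccess() {
                            Log.i(TAG, "Authenticated successfully");
                        }

                        @Override
                        public void onError(String error) {
                            Log.e(TAG, "Error authenticating: " + error);
                        }
                    });
        }
    }
}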
Analyse a text for hate speech
Hate speech analysis is done through the HateSpeechAnalyzer enum. First of all, make sure the HateSpeechAnalyzer is initialised:
HateSpeechAnalyzer.INSTANCE.init(context);
To find out if a text contains hate speech, use the following call:
boolean containsHateSpeech = HateSpeechAnalyzer.INSTANCE.containsHateSpeech(someText);
You can also get the underlying hate speech score (between 0 and 1) associated with the text by using:
float hateSpeechScore = HateSpeechAnalyzer.INSTANCE.analyzeText(someText);
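For example, a simple moderation flow might combine the two calls; the threshold and helper methods here are illustrative assumptions, not part of the SDK.

// Sketch: gate a chat message on hate speech.
float score = HateSpeechAnalyzer.INSTANCE.analyzeText(someText);
if (HateSpeechAnalyzer.INSTANCE.containsHateSpeech(someText)) {
    blockMessage(someText);      // hypothetical helper
} else if (score > 0.5f) {       // illustrative threshold
    flagForReview(someText);     // hypothetical helper
}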
Analyse a text for toxicity
Toxicity analysis is done through the ToxicityAnalyzer enum. First of all, make sure the ToxicityAnalyzer is initialised:
ToxicityAnalyzer.INSTANCE.initManager(context);
To find out if a text contains toxicity, use the following call:
boolean containsToxicity = ToxicityAnalyzer.INSTANCE.containsToxicity(someText);
You can also get the underlying toxicity score (between 0 and 1) associated with the text by using:
float toxicityScore = ToxicityAnalyzer.INSTANCE.analyzeText(someText);
Analyse a text for profanity
Profanity analysis is done through the ProfanityAnalyzer enum. First of all, make sure the ProfanityAnalyzer is initialised:
ProfanityAnalyzer.INSTANCE.init(context);
To find out if a text contains profanity, use the following call:
boolean containsProfanity = ProfanityAnalyzer.INSTANCE.containsProfanity(someText);
Analyse a text for sensitive information
Sensitive information analysis is done through the SensitiveInfoAnalyzer enum. First of all, make sure the SensitiveInfoAnalyzer is initialised:
SensitiveInfoAnalyzer.INSTANCE.init();
To find out if a text contains sensitive information (a UK postcode, UK phone number, or email address), use the following call:
SensitiveInfoResult sensitiveInfoResult =
        SensitiveInfoAnalyzer.INSTANCE.analyzeText(someText);
You can then query the SensitiveInfoResult with containsUkPostcode(), containsUkPhoneNumber(), and containsEmailAddress(). You can also use the contains(SensitiveInfoMatch sensitiveInfo) method or containsAll(Collection<SensitiveInfoMatch> sensitiveInfo), and even use the asSet() method for more flexibility.
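Putting it together, a sensitive-information check might look like this; the asSet() element type is inferred from the signatures above, and warnUser is a hypothetical helper.

SensitiveInfoAnalyzer.INSTANCE.init();
SensitiveInfoResult result = SensitiveInfoAnalyzer.INSTANCE.analyzeText(someText);

// Check for the specific kinds of sensitive information documented above.
if (result.containsUkPhoneNumber() || result.containsEmailAddress()) {
    warnUser(someText); // hypothetical helper
}

// Or inspect every match that was found.
for (SensitiveInfoMatch match : result.asSet()) {
    Log.i(TAG, "Found sensitive info: " + match);
}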