---
kind:
- Solution
products:
- Alauda Application Services
ProductsVersion:
- 4.x
---

# How to Deploy and Use Konveyor

## Overview

Konveyor is a CNCF (Cloud Native Computing Foundation) project that provides a modular platform for application modernization. It supports the entire lifecycle of modernization: discovery, assessment, analysis, and execution. This guide covers deploying the Konveyor Hub (Tackle) platform and its core components.

## Prerequisites

- Kubernetes cluster with kubectl access
- StorageClass that supports ReadWriteMany (RWX) access mode
- StorageClass for RWO volumes (for databases)
- (Optional) LoadBalancer or Ingress Controller for external access
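
To check which StorageClasses are available in the cluster, list them as shown below; whether a class supports RWX depends on its provisioner, so confirm against your storage documentation:

```bash
# List StorageClasses and their provisioners
kubectl get storageclass
```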

## Install Konveyor Operator

Download the Konveyor Operator plugin from the [Alauda Cloud Console](https://cloud.alauda.io/) Marketplace, then follow the [Upload Packages](https://docs.alauda.io/container_platform/4.2/extend/upload_package.html) guide to upload the plugin to the cluster.
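
Before creating the Tackle CR, confirm the operator is running. A minimal check, assuming the operator was installed into the `konveyor-tackle` namespace used throughout this guide:

```bash
# The operator deployment should be Available before creating the Tackle CR
kubectl get deployments -n konveyor-tackle
```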

## Deploy Konveyor Hub (Tackle)

### Create Tackle Instance

Deploy the Tackle platform by creating a Tackle CR. The Tackle instance must be deployed in the same namespace as the `konveyor-operator`.

```bash
cat << EOF | kubectl create -f -
apiVersion: tackle.konveyor.io/v1alpha1
kind: Tackle
metadata:
  name: tackle
  namespace: konveyor-tackle
spec:
  feature_auth_required: true
  feature_isolate_namespace: true
  feature_analysis_archiver: true
  hub_database_volume_size: 5Gi
  hub_bucket_volume_size: 100Gi
  rwx_supported: true
  hub_bucket_storage_class: nfs # Replace with your RWX StorageClass
  rwo_storage_class: sc-topolvm # Replace with your RWO StorageClass
  cache_storage_class: nfs
  cache_data_volume_size: 100Gi
EOF
```

### Verify Deployment

Check the status of the pods in the `konveyor-tackle` namespace:

```bash
kubectl get pods -n konveyor-tackle
```

Ensure all pods are in `Running` or `Completed` state before proceeding.
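
To keep an eye on the rollout without re-running the command, watch the namespace and press `Ctrl+C` once everything settles into `Running` or `Completed`:

```bash
# Watch pod status changes in the namespace
kubectl get pods -n konveyor-tackle -w
```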

> [!WARNING]
> The Tackle instance must be deployed in the same namespace as the `konveyor-operator`. If you deploy it in a different namespace, some resources created by the operator (such as PersistentVolumeClaims, ConfigMaps, Secrets, and ServiceAccounts) might not be automatically deleted when the Tackle custom resource is removed. In that case, you must manually clean up these resources in the affected namespaces, for example:
>
> ```bash
> # Delete common resources labeled for the Tackle instance
> kubectl delete pvc,configmap,secret,sa -l app.kubernetes.io/instance=tackle -n konveyor-tackle
> ```

### Configuration Options

| Name | Default | Description |
| --- | --- | --- |
| `spec.feature_auth_required` | `true` | Enable Keycloak authentication (set `false` for single user/no auth) |
| `spec.feature_isolate_namespace` | `true` | Enable namespace isolation via network policies |
| `spec.feature_analysis_archiver` | `true` | Automatically archive old analysis reports when a new one is created |
| `spec.rwx_supported` | `true` | Whether RWX volumes are supported in the cluster |
| `spec.hub_database_volume_size` | `5Gi` | Size requested for Hub database volume |
| `spec.hub_bucket_volume_size` | `100Gi` | Size requested for Hub bucket volume |
| `spec.keycloak_database_data_volume_size` | `1Gi` | Size requested for Keycloak DB volume |
| `spec.cache_data_volume_size` | `100Gi` | Size requested for Tackle Cache volume |
| `spec.cache_storage_class` | N/A | StorageClass requested for Tackle Cache volume |
| `spec.hub_bucket_storage_class` | N/A | StorageClass requested for Tackle Hub Bucket volume (RWX) |
| `spec.rwo_storage_class` | N/A | StorageClass requested for RWO database volumes |
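
These fields can also be changed on a running instance; the operator reconciles the Tackle CR and applies the new values. For example, a minimal sketch of switching to single-user/no-auth mode after deployment (field name taken from the table above):

```bash
# Disable Keycloak authentication on an existing Tackle instance
kubectl patch tackle tackle -n konveyor-tackle --type=merge \
  -p '{"spec":{"feature_auth_required":false}}'
```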

## Access Tackle UI

### Quick Access via Port-Forward

1. Set up port forwarding:

```bash
kubectl -n konveyor-tackle port-forward service/tackle-ui 8080:8080
```

2. Open [http://127.0.0.1:8080](http://127.0.0.1:8080) in your browser.
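
If local port 8080 is already in use, forward any free local port to the service port instead and substitute it in the URLs that follow:

```bash
# Example: use local port 9090 instead of 8080
kubectl -n konveyor-tackle port-forward service/tackle-ui 9090:8080
```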

### Initialize Admin Account

The built-in Keycloak generates a random password for its admin account on startup; it is stored in the `tackle-keycloak-sso` secret.

1. Retrieve Keycloak admin credentials:

```bash
# Get username (default: admin)
kubectl -n konveyor-tackle get secret tackle-keycloak-sso -o jsonpath='{.data.username}' | base64 -d

# Get password
kubectl -n konveyor-tackle get secret tackle-keycloak-sso -o jsonpath='{.data.password}' | base64 -d
```

2. Log in to the Keycloak admin console at [http://127.0.0.1:8080/auth/admin/](http://127.0.0.1:8080/auth/admin/). The `/auth` prefix matches the built-in Keycloak deployed by the operator; a standalone Keycloak 17 or later serves the admin console at `/admin/` unless it is started with `--http-relative-path=/auth`.

3. Reset the Tackle admin password:
   - Select the **tackle** Realm from the dropdown (not the Master Realm)
   - Click **Users** in the left menu
   - Find and select the **admin** user
   - Click the **Credentials** tab
   - Enter a new password (e.g., `admin@123`)
   - Disable the **Temporary** toggle
   - Click **Reset Password**

4. Log in to Tackle at [http://127.0.0.1:8080](http://127.0.0.1:8080) with the **admin** user and the new password.

### Secure Access via Ingress (Production)

Port-forwarding is suitable only for temporary access. For production, configure an Ingress with TLS.

#### Ingress Prerequisites

- A domain name (e.g., `tackle.example.com`)
- LoadBalancer service deployed (see [ALB deployment guide](https://docs.alauda.io/container_platform/4.1/configure/networking/how_to/alb/deploy_alb.html))
- cert-manager installed

#### Create TLS Certificate

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: tackle-ssl-cert
  namespace: konveyor-tackle
spec:
  commonName: tackle.example.com
  dnsNames:
    - tackle.example.com
  issuerRef:
    kind: ClusterIssuer
    name: cpaas-ca # Replace with your Issuer
  secretName: tackle-tls-secret
  usages:
    - server auth
    - client auth
```
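
Apply the manifest and wait for cert-manager to issue the certificate before creating the Ingress (the file name below is just an example):

```bash
# Apply the Certificate and wait for it to become Ready
kubectl apply -f tackle-certificate.yaml
kubectl wait --for=condition=Ready certificate/tackle-ssl-cert -n konveyor-tackle --timeout=300s
```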

#### Create Ingress

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  name: tackle-ui-tls-ingress
  namespace: konveyor-tackle
spec:
  ingressClassName: nginx # Replace with your Ingress Class
  rules:
    - host: tackle.example.com
      http:
        paths:
          - backend:
              service:
                name: tackle-ui
                port:
                  number: 8080
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - tackle.example.com
      secretName: tackle-tls-secret
```

> [!NOTE]
> Replace `tackle.example.com` with your actual domain.

Access Tackle at `https://tackle.example.com`.
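
To confirm the Ingress and certificate are wired up correctly (assuming DNS for the domain already points at your load balancer):

```bash
# Check that the UI answers over TLS; add -k if your issuer is not publicly trusted
curl -I https://tackle.example.com
```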

## Enable KAI (Konveyor AI)

KAI integrates with external LLM services to provide AI-powered code migration assistance. It supports multiple providers and models.

### Supported Providers and Models

| Provider (`kai_llm_provider`) | Model (`kai_llm_model`) |
| --- | --- |
| `openai` | `gpt-4`, `gpt-4o`, `gpt-4o-mini`, `gpt-3.5-turbo` |
| `azure_openai` | `gpt-4`, `gpt-35-turbo` |
| `bedrock` | `anthropic.claude-3-5-sonnet-20241022-v2:0`, `meta.llama3-1-70b-instruct-v1:0` |
| `google` | `gemini-2.0-flash-exp`, `gemini-1.5-pro` |
| `ollama` | `llama3.1`, `codellama`, `mistral` |
| `groq` | `llama-3.1-70b-versatile`, `mixtral-8x7b-32768` |
| `anthropic` | `claude-3-5-sonnet-20241022`, `claude-3-haiku-20240307` |

### Enable KAI in Tackle

1. Update the Tackle configuration:

```yaml
apiVersion: tackle.konveyor.io/v1alpha1
kind: Tackle
metadata:
  name: tackle
  namespace: konveyor-tackle
spec:
  kai_solution_server_enabled: true
  kai_llm_provider: openai # Choose your provider
  kai_llm_model: gpt-4o-mini # Choose your model
```

2. Create API credentials secret:

**For OpenAI:**

```bash
kubectl create secret generic kai-api-keys -n konveyor-tackle \
  --from-literal=OPENAI_API_BASE='https://api.openai.com/v1' \
  --from-literal=OPENAI_API_KEY='<YOUR_OPENAI_KEY>'
```

**For Google:**

```bash
kubectl create secret generic kai-api-keys -n konveyor-tackle \
  --from-literal=GOOGLE_API_KEY='<YOUR_GOOGLE_API_KEY>'
```

3. Force the operator to reconcile and pick up the new credentials:

```bash
kubectl patch tackle tackle -n konveyor-tackle --type=merge -p \
'{"metadata":{"annotations":{"konveyor.io/force-reconcile":"'"$(date +%s)"'"}}}'
```
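
After the reconcile, confirm that the credentials secret exists and that the KAI workloads come up; exact pod names vary by operator version, so a broad listing is the simplest check:

```bash
# Verify the credentials secret and list the pods in the namespace
kubectl get secret kai-api-keys -n konveyor-tackle
kubectl get pods -n konveyor-tackle
```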

## Konveyor Components Overview

Konveyor provides a modular architecture for application modernization:

| Component | Description |
| --- | --- |
| **Konveyor Hub** | Central control plane providing unified application inventory, assessment module (risk evaluation), and analysis module (static code analysis). Implements RBAC with Administrator, Architect, and Migrator roles. |
| **Kantra & Analyzer-LSP** | CLI tool for offline static analysis. Analyzer-LSP integrates into IDEs (VSCode) via Language Server Protocol for real-time migration issue detection. |
| **Konveyor AI (KAI)** | RAG-based AI assistant for automated code remediation. Uses Solved Incident Store for context-aware code patch generation. |
| **Move2Kube** | Automates conversion from Cloud Foundry/OpenShift to Kubernetes. Three phases: Collect, Plan, Transform. Generates Dockerfiles, K8s manifests, Helm Charts, and Tekton Pipelines. |
| **Forklift** | VM migration tool for moving VMs from VMware vSphere, oVirt, or OpenStack to KubeVirt. |
| **Crane** | Kubernetes-to-Kubernetes migration tool for cluster upgrades or cross-distribution migrations. Handles PV data sync with Restic or VolSync. |

## Reference

- [Konveyor Official Documentation](https://konveyor.io/docs/konveyor/)
- [Konveyor Admin Tasks](https://konveyor.io/docs/konveyor/admintasks/)
- [Konveyor Operator Repository](https://github.com/konveyor/operator)