misc: Update to new repository locations

Update all references to the new repository locations. Many of these will
redirect; however, the one used for the hypervisor-fw binary does not, so
this change is required to allow the builds to pass.

Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Author: Rob Bradford, 2019-11-21 10:05:30 +00:00
parent 64305dab16
commit 8ec89bc884
6 changed files with 22 additions and 22 deletions


@@ -51,7 +51,7 @@ Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Cloud Hypervisor uses the “fork-and-pull” development model. Follow these steps if
you want to merge your changes to `cloud-hypervisor`:
-1. Fork the [cloud-hypervisor](https://github.com/intel/cloud-hypervisor) project
+1. Fork the [cloud-hypervisor](https://github.com/cloud-hypervisor/cloud-hypervisor) project
into your github organization.
2. Within your fork, create a branch for your contribution.
3. [Create a pull request](https://help.github.com/articles/creating-a-pull-request-from-a-fork/)
@@ -65,7 +65,7 @@ you want to merge your changes to `cloud-hypervisor`:
## Issue tracking
If you have a problem, please let us know. We recommend using
-[github issues](https://github.com/intel/cloud-hypervisor/issues/new) for formally
+[github issues](https://github.com/cloud-hypervisor/cloud-hypervisor/issues/new) for formally
reporting and documenting them.
To quickly and informally bring something up to us, you can also reach out on [Slack](https://cloud-hypervisor.slack.com).


@@ -1,4 +1,4 @@
-[![Build Status](https://travis-ci.com/intel/cloud-hypervisor.svg?branch=master)](https://travis-ci.com/intel/cloud-hypervisor)
+[![Build Status](https://travis-ci.com/cloud-hypervisor/cloud-hypervisor.svg?branch=master)](https://travis-ci.com/cloud-hypervisor/cloud-hypervisor)
1. [What is Cloud Hypervisor?](#1-what-is-cloud-hypervisor)
* [Requirements](#requirements)
@@ -74,7 +74,7 @@ First you need to clone and build the cloud-hypervisor repo:
```shell
$ pushd $CLOUDH
-$ git clone https://github.com/intel/cloud-hypervisor.git
+$ git clone https://github.com/cloud-hypervisor/cloud-hypervisor.git
$ cd cloud-hypervisor
$ cargo build --release
@@ -95,7 +95,7 @@ You can run a guest VM by either using an existing cloud image or booting into y
`cloud-hypervisor` supports booting disk images containing all needed
components to run cloud workloads, a.k.a. cloud images. To do that we rely on
the [Rust Hypervisor
-Firmware](https://github.com/intel/rust-hypervisor-firmware) project to provide
+Firmware](https://github.com/cloud-hypervisor/rust-hypervisor-firmware) project to provide
an ELF
formatted KVM firmware for `cloud-hypervisor` to directly boot into.
@@ -105,7 +105,7 @@ We need to get the latest `rust-hypervisor-firmware` release and also a working
$ pushd $CLOUDH
$ wget https://download.clearlinux.org/releases/29160/clear/clear-29160-kvm.img.xz
$ unxz clear-29160-kvm.img.xz
-$ wget https://github.com/intel/rust-hypervisor-firmware/releases/download/0.2.0/hypervisor-fw
+$ wget https://github.com/cloud-hypervisor/rust-hypervisor-firmware/releases/download/0.2.0/hypervisor-fw
$ popd
```
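The `hypervisor-fw` asset URL in the hunk above is the one the commit message calls out as not redirecting, so a failed fetch can silently leave behind something other than firmware (say, an error page). A quick sanity check that the downloaded file is actually an ELF binary, sketched with POSIX tools; the bare `hypervisor-fw` path is illustrative:

```shell
# Check the 4-byte ELF magic (0x7f 'E' 'L' 'F') at the start of a file.
is_elf() {
    [ "$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')" = "7f454c46" ]
}

# Illustrative usage against the downloaded firmware path:
if is_elf "hypervisor-fw"; then
    echo "hypervisor-fw looks like an ELF binary"
else
    echo "hypervisor-fw is not an ELF file; the download likely failed" >&2
fi
```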
@@ -207,12 +207,12 @@ Clear Linux root partitions, and also basic initrd/initramfs images.
## Device Model
-Follow this [documentation](https://github.com/intel/cloud-hypervisor/blob/master/docs/device_model.md).
+Follow this [documentation](https://github.com/cloud-hypervisor/cloud-hypervisor/blob/master/docs/device_model.md).
## TODO
We are not tracking the `cloud-hypervisor` TODO list from a specific git tracked file but through
-[github issues](https://github.com/intel/cloud-hypervisor/issues/new) instead.
+[github issues](https://github.com/cloud-hypervisor/cloud-hypervisor/issues/new) instead.
# 4. `rust-vmm` project dependency


@@ -104,7 +104,7 @@ selecting `--serial tty --console off` from the command line.
### virtio-iommu
As we want to improve our nested guests support, we added support for exposing
-a [paravirtualized IOMMU](https://github.com/intel/cloud-hypervisor/blob/master/docs/iommu.md)
+a [paravirtualized IOMMU](https://github.com/cloud-hypervisor/cloud-hypervisor/blob/master/docs/iommu.md)
device through virtio. This allows for a safer nested virtio and directly
assigned devices support.
@@ -175,7 +175,7 @@ flag `--vhost-user-blk`.
shared file system, allowing for an efficient and reliable way of sharing
a filesystem between the host and the cloud-hypervisor guest.
-See our [filesystem sharing](https://github.com/intel/cloud-hypervisor/blob/master/docs/fs.md)
+See our [filesystem sharing](https://github.com/cloud-hypervisor/cloud-hypervisor/blob/master/docs/fs.md)
documentation for more details on how to use virtio-fs with cloud-hypervisor.
This device is always built-in, and it is enabled based on the presence of the
@@ -197,7 +197,7 @@ VFIO (Virtual Function I/O) is a kernel framework that exposes direct device
access to userspace. `cloud-hypervisor` uses VFIO to directly assign host
physical devices into its guest.
-See our [VFIO documentation](https://github.com/intel/cloud-hypervisor/blob/master/docs/vfio.md)
+See our [VFIO documentation](https://github.com/cloud-hypervisor/cloud-hypervisor/blob/master/docs/vfio.md)
for more details on how to directly assign host devices to `cloud-hypervisor`
guests.


@@ -12,7 +12,7 @@ This virtual device relies on the _vhost-user_ protocol, which assumes the backe
_Install virtiofsd_
```bash
-VIRTIOFSD_URL="$(curl --silent https://api.github.com/repos/intel/nemu/releases/latest | grep "browser_download_url" | grep "virtiofsd-x86_64" | grep -o 'https://.*[^ "]')"
+VIRTIOFSD_URL="$(curl --silent https://api.github.com/repos/cloud-hypervisor/nemu/releases/latest | grep "browser_download_url" | grep "virtiofsd-x86_64" | grep -o 'https://.*[^ "]')"
wget --quiet $VIRTIOFSD_URL -O "virtiofsd"
chmod +x "virtiofsd"
sudo setcap cap_sys_admin+epi "virtiofsd"
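The `VIRTIOFSD_URL` line above pulls a release-asset URL out of the GitHub API response with a chain of `grep`s. A minimal, self-contained sketch of the same extraction against a canned API fragment (`browser_download_url` is the real field name in GitHub's releases API; the JSON line here is illustrative):

```shell
# Canned stand-in for one line of a GitHub releases API response.
json='"browser_download_url": "https://github.com/cloud-hypervisor/rust-hypervisor-firmware/releases/download/0.2.0/hypervisor-fw"'

# Keep the download-url line, then cut the bare URL out of it. Anchoring
# on the closing quote avoids capturing trailing JSON punctuation.
url=$(printf '%s\n' "$json" | grep "browser_download_url" | grep -o 'https://[^"]*')
echo "$url"
# → https://github.com/cloud-hypervisor/rust-hypervisor-firmware/releases/download/0.2.0/hypervisor-fw
```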


@@ -26,7 +26,7 @@
# v0.3.0
-This release has been tracked through the [0.3.0 project](https://github.com/intel/cloud-hypervisor/projects/3).
+This release has been tracked through the [0.3.0 project](https://github.com/cloud-hypervisor/cloud-hypervisor/projects/3).
Highlights for `cloud-hypervisor` version 0.3.0 include:
@@ -73,7 +73,7 @@ configurations that do not require a PCI bus emulation.
### Paravirtualized IOMMU
As we want to improve our nested guests support, we added support for exposing
-a [paravirtualized IOMMU](https://github.com/intel/cloud-hypervisor/blob/master/docs/iommu.md)
+a [paravirtualized IOMMU](https://github.com/cloud-hypervisor/cloud-hypervisor/blob/master/docs/iommu.md)
device through virtio. This allows for a safer nested virtio and directly
assigned devices support.
@@ -85,7 +85,7 @@ setting.
### Ubuntu 19.10
-With the latest [hypervisor firmware](https://github.com/intel/rust-hypervisor-firmware),
+With the latest [hypervisor firmware](https://github.com/cloud-hypervisor/rust-hypervisor-firmware),
we can now support the latest
[Ubuntu 19.10 (Eoan Ermine)](http://releases.ubuntu.com/19.10/) cloud images.
@@ -96,7 +96,7 @@ support guests with large amount of memory (more than 64GB).
# v0.2.0
-This release has been tracked through the [0.2.0 project](https://github.com/intel/cloud-hypervisor/projects/2).
+This release has been tracked through the [0.2.0 project](https://github.com/cloud-hypervisor/cloud-hypervisor/projects/2).
Highlights for `cloud-hypervisor` version 0.2.0 include:
@@ -124,7 +124,7 @@ Based on the Firecracker idea of using a dedicated I/O port to measure guest
boot times, we added support for logging guest events through the
[0x80](https://www.intel.com/content/www/us/en/support/articles/000005500/boards-and-kits.html)
PC debug port. This allows, among other things, for granular guest boot time
-measurements. See our [debug port documentation](https://github.com/intel/cloud-hypervisor/blob/master/docs/debug-port.md)
+measurements. See our [debug port documentation](https://github.com/cloud-hypervisor/cloud-hypervisor/blob/master/docs/debug-port.md)
for more details.
### Improved direct device assignment
@@ -144,13 +144,13 @@ memory footprint.
### Ubuntu bionic based CI
-Thanks to our [simple KVM firmware](https://github.com/intel/rust-hypervisor-firmware)
+Thanks to our [simple KVM firmware](https://github.com/cloud-hypervisor/rust-hypervisor-firmware)
improvements, we are now able to boot Ubuntu bionic images. We added those to
our CI pipeline.
# v0.1.0
-This release has been tracked through the [0.1.0 project](https://github.com/intel/cloud-hypervisor/projects/1).
+This release has been tracked through the [0.1.0 project](https://github.com/cloud-hypervisor/cloud-hypervisor/projects/1).
Highlights for `cloud-hypervisor` version 0.1.0 include:
@@ -160,7 +160,7 @@ We added support for the [virtio-fs](https://virtio-fs.gitlab.io/) shared file
system, allowing for an efficient and reliable way of sharing a filesystem
between the host and the `cloud-hypervisor` guest.
-See our [filesystem sharing](https://github.com/intel/cloud-hypervisor/blob/master/docs/fs.md)
+See our [filesystem sharing](https://github.com/cloud-hypervisor/cloud-hypervisor/blob/master/docs/fs.md)
documentation for more details on how to use virtio-fs with `cloud-hypervisor`.
### Initial direct device assignment support
@@ -169,7 +169,7 @@ VFIO (Virtual Function I/O) is a kernel framework that exposes direct device
access to userspace. `cloud-hypervisor` uses VFIO to directly assign host
physical devices into its guest.
-See our [VFIO](https://github.com/intel/cloud-hypervisor/blob/master/docs/vfio.md)
+See our [VFIO](https://github.com/cloud-hypervisor/cloud-hypervisor/blob/master/docs/vfio.md)
documentation for more detail on how to directly assign host devices to
`cloud-hypervisor` guests.


@@ -6,7 +6,7 @@ source $HOME/.cargo/env
WORKLOADS_DIR="$HOME/workloads"
mkdir -p "$WORKLOADS_DIR"
-FW_URL=$(curl --silent https://api.github.com/repos/intel/rust-hypervisor-firmware/releases/latest | grep "browser_download_url" | grep -o 'https://.*[^ "]')
+FW_URL=$(curl --silent https://api.github.com/repos/cloud-hypervisor/rust-hypervisor-firmware/releases/latest | grep "browser_download_url" | grep -o 'https://.*[^ "]')
FW="$WORKLOADS_DIR/hypervisor-fw"
if [ ! -f "$FW" ]; then
pushd $WORKLOADS_DIR
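The tail of the CI script above guards the firmware download with an existence check so repeated runs reuse the cached binary. A self-contained sketch of that download-once pattern (the temporary directory and the stand-in for the `wget` step are illustrative):

```shell
WORKLOADS_DIR=$(mktemp -d)
FW="$WORKLOADS_DIR/hypervisor-fw"

# Fetch only when the target file is missing, mirroring the
# `if [ ! -f "$FW" ]` guard in the script above.
fetch_once() {
    if [ ! -f "$1" ]; then
        # Stand-in for: wget --quiet "$FW_URL" -O "$1"
        printf 'firmware-bytes' > "$1"
        echo "downloaded: $1"
    else
        echo "cached: $1"
    fi
}

fetch_once "$FW"   # first call performs the download
fetch_once "$FW"   # second call finds the cached copy
```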