Imagine two cameras set up side by side, with the same model lens at the same focal length, f-stop, shutter speed, and ISO, centered on the same object in the distance. One is a full-frame camera and the other is an APS-C body. You make a photo with each camera.
Let's compare the photos, specifically their respective depths of field. A common definition of depth of field is the range of distances in a photo within which the smallest detail looks acceptably sharp.
If we adopt this for the comparison, there will be a nearest and farthest point in the photo made with the full-frame camera within which the smallest details look acceptably sharp.
The photo made with the APS-C camera is centered and focused on the same point as the first photo. Because it was made at the same focal length but on a smaller sensor, the crop factor produces a photo with a narrower angle of view.
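To make the narrower angle of view concrete, here is a minimal sketch using the thin-lens approximation. The specific numbers (a 50 mm focal length, a 36 mm full-frame sensor width, and 23.6 mm for APS-C) are illustrative assumptions, not values from the text:

```python
import math

def angle_of_view(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view in degrees, thin-lens approximation."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Same 50 mm lens on both bodies; only the sensor width differs.
ff_aov = angle_of_view(36.0, 50.0)    # full frame, roughly 40 degrees
apsc_aov = angle_of_view(23.6, 50.0)  # APS-C, roughly 27 degrees: narrower
print(ff_aov, apsc_aov)
```

The APS-C frame simply records a smaller central portion of the image circle, which is what "crop factor" describes.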
Here's the important part: if we view the second (APS-C) photo at the same size and distance - either on a computer screen or as a framed print - anything visible in the frame will be magnified in size compared with the same element in the first (full-frame) photo.
As a result, the nearest small detail that looked in focus in the full-frame photo will be rendered larger in the APS-C photo and look soft. Similarly, the most distant small detail that was acceptably sharp in the full-frame photo will also be larger and look soft.
Depth of field in the APS-C photo will be shallower than in the full-frame photo, and the factor producing that difference is sensor size.
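The argument above can be checked numerically with the standard hyperfocal-distance formulas. The extra enlargement of the APS-C image is modeled by shrinking the circle of confusion (the "acceptably sharp" threshold) by the 1.5x crop factor. The specific values here (50 mm lens, f/2.8, 3 m subject distance, 0.030 mm full-frame circle of confusion) are illustrative assumptions:

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm):
    """Total depth of field (mm) from the standard thin-lens formulas."""
    hyperfocal = focal_mm**2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return far - near

coc_ff = 0.030          # mm, a common full-frame sharpness criterion
coc_apsc = coc_ff / 1.5 # smaller: the APS-C image is enlarged 1.5x more

ff_dof = depth_of_field(50.0, 2.8, 3000.0, coc_ff)
apsc_dof = depth_of_field(50.0, 2.8, 3000.0, coc_apsc)
print(ff_dof, apsc_dof)  # the APS-C figure is smaller: shallower DOF
```

With everything else held equal, the only input that changes is the circle of confusion, and that change alone reduces the computed depth of field, which is exactly the claim the comparison makes.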